Columns: title (string, 13–150 chars), url (string, 97 chars), authors (string, 8–467 chars), detail_url (string, 97 chars), tags (string, 1 class), AuthorFeedback (string, 102 chars), Bibtex (string, 53–54 chars), MetaReview (string, 99 chars), Paper (string, 93 chars), Review (string, 95 chars), Supplemental (string, 100 chars), abstract (string, 53–2k chars)
Online Influence Maximization under Linear Threshold Model
https://papers.nips.cc/paper_files/paper/2020/hash/0d352b4d3a317e3eae221199fdb49651-Abstract.html
Shuai Li, Fang Kong, Kejie Tang, Qizhi Li, Wei Chen
https://papers.nips.cc/paper_files/paper/2020/hash/0d352b4d3a317e3eae221199fdb49651-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0d352b4d3a317e3eae221199fdb49651-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9825-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0d352b4d3a317e3eae221199fdb49651-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0d352b4d3a317e3eae221199fdb49651-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0d352b4d3a317e3eae221199fdb49651-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0d352b4d3a317e3eae221199fdb49651-Supplemental.pdf
Online influence maximization (OIM) is a popular problem in social networks to learn influence propagation model parameters and maximize the influence spread at the same time. Most previous studies focus on the independent cascade (IC) model under the edge-level feedback. In this paper, we address OIM in the linear thr...
Ensembling geophysical models with Bayesian Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/0d5501edb21a59a43435efa67f200828-Abstract.html
Ushnish Sengupta, Matt Amos, Scott Hosking, Carl Edward Rasmussen, Matthew Juniper, Paul Young
https://papers.nips.cc/paper_files/paper/2020/hash/0d5501edb21a59a43435efa67f200828-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0d5501edb21a59a43435efa67f200828-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9826-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0d5501edb21a59a43435efa67f200828-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0d5501edb21a59a43435efa67f200828-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0d5501edb21a59a43435efa67f200828-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0d5501edb21a59a43435efa67f200828-Supplemental.pdf
Ensembles of geophysical models improve projection accuracy and express uncertainties. We develop a novel data-driven ensembling strategy for combining geophysical models using Bayesian Neural Networks, which infers spatiotemporally varying model weights and bias while accounting for heteroscedastic uncertainties in th...
Delving into the Cyclic Mechanism in Semi-supervised Video Object Segmentation
https://papers.nips.cc/paper_files/paper/2020/hash/0d5bd023a3ee11c7abca5b42a93c4866-Abstract.html
Yuxi Li, Ning Xu, Jinlong Peng, John See, Weiyao Lin
https://papers.nips.cc/paper_files/paper/2020/hash/0d5bd023a3ee11c7abca5b42a93c4866-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0d5bd023a3ee11c7abca5b42a93c4866-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9827-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0d5bd023a3ee11c7abca5b42a93c4866-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0d5bd023a3ee11c7abca5b42a93c4866-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0d5bd023a3ee11c7abca5b42a93c4866-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0d5bd023a3ee11c7abca5b42a93c4866-Supplemental.pdf
In this paper, we attempt to incorporate the cyclic mechanism into the vision task of semi-supervised video object segmentation. By resorting to the accurate reference mask of the first frame, we try to mitigate the error propagation problem in most current video object segmentation pipelines. Firstly, we propo...
Asymmetric Shapley values: incorporating causal knowledge into model-agnostic explainability
https://papers.nips.cc/paper_files/paper/2020/hash/0d770c496aa3da6d2c3f2bd19e7b9d6b-Abstract.html
Christopher Frye, Colin Rowat, Ilya Feige
https://papers.nips.cc/paper_files/paper/2020/hash/0d770c496aa3da6d2c3f2bd19e7b9d6b-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0d770c496aa3da6d2c3f2bd19e7b9d6b-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9828-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0d770c496aa3da6d2c3f2bd19e7b9d6b-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0d770c496aa3da6d2c3f2bd19e7b9d6b-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0d770c496aa3da6d2c3f2bd19e7b9d6b-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0d770c496aa3da6d2c3f2bd19e7b9d6b-Supplemental.pdf
Explaining AI systems is fundamental both to the development of high performing models and to the trust placed in them by their users. The Shapley framework for explainability has strength in its general applicability combined with its precise, rigorous foundation: it provides a common, model-agnostic language for AI e...
Understanding Deep Architecture with Reasoning Layer
https://papers.nips.cc/paper_files/paper/2020/hash/0d82627e10660af39ea7eb69c3568955-Abstract.html
Xinshi Chen, Yufei Zhang, Christoph Reisinger, Le Song
https://papers.nips.cc/paper_files/paper/2020/hash/0d82627e10660af39ea7eb69c3568955-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0d82627e10660af39ea7eb69c3568955-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9829-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0d82627e10660af39ea7eb69c3568955-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0d82627e10660af39ea7eb69c3568955-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0d82627e10660af39ea7eb69c3568955-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0d82627e10660af39ea7eb69c3568955-Supplemental.zip
Recently, there has been a surge of interest in combining deep learning models with reasoning in order to handle more sophisticated learning tasks. In many cases, a reasoning task can be solved by an iterative algorithm. This algorithm is often unrolled, truncated, and used as a specialized layer in the deep architecture, wh...
Planning in Markov Decision Processes with Gap-Dependent Sample Complexity
https://papers.nips.cc/paper_files/paper/2020/hash/0d85eb24e2add96ff1a7021f83c1abc9-Abstract.html
Anders Jonsson, Emilie Kaufmann, Pierre Menard, Omar Darwiche Domingues, Edouard Leurent, Michal Valko
https://papers.nips.cc/paper_files/paper/2020/hash/0d85eb24e2add96ff1a7021f83c1abc9-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0d85eb24e2add96ff1a7021f83c1abc9-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9830-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0d85eb24e2add96ff1a7021f83c1abc9-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0d85eb24e2add96ff1a7021f83c1abc9-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0d85eb24e2add96ff1a7021f83c1abc9-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0d85eb24e2add96ff1a7021f83c1abc9-Supplemental.pdf
We propose MDP-GapE, a new trajectory-based Monte-Carlo Tree Search algorithm for planning in a Markov Decision Process in which transitions have a finite support. We prove an upper bound on the number of sampled trajectories needed for MDP-GapE to identify a near-optimal action with high probability. This problem-depe...
Provably Good Batch Off-Policy Reinforcement Learning Without Great Exploration
https://papers.nips.cc/paper_files/paper/2020/hash/0dc23b6a0e4abc39904388dd3ffadcd1-Abstract.html
Yao Liu, Adith Swaminathan, Alekh Agarwal, Emma Brunskill
https://papers.nips.cc/paper_files/paper/2020/hash/0dc23b6a0e4abc39904388dd3ffadcd1-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0dc23b6a0e4abc39904388dd3ffadcd1-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9831-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0dc23b6a0e4abc39904388dd3ffadcd1-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0dc23b6a0e4abc39904388dd3ffadcd1-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0dc23b6a0e4abc39904388dd3ffadcd1-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0dc23b6a0e4abc39904388dd3ffadcd1-Supplemental.pdf
Batch reinforcement learning (RL) is important for applying RL algorithms to many high-stakes tasks. Doing batch RL in a way that yields a reliable new policy in large domains is challenging: a new decision policy may visit states and actions outside the support of the batch data, and function approximation and optimizatio...
Detection as Regression: Certified Object Detection with Median Smoothing
https://papers.nips.cc/paper_files/paper/2020/hash/0dd1bc593a91620daecf7723d2235624-Abstract.html
Ping-yeh Chiang, Michael Curry, Ahmed Abdelkader, Aounon Kumar, John Dickerson, Tom Goldstein
https://papers.nips.cc/paper_files/paper/2020/hash/0dd1bc593a91620daecf7723d2235624-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0dd1bc593a91620daecf7723d2235624-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9832-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0dd1bc593a91620daecf7723d2235624-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0dd1bc593a91620daecf7723d2235624-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0dd1bc593a91620daecf7723d2235624-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0dd1bc593a91620daecf7723d2235624-Supplemental.pdf
Despite the vulnerability of object detectors to adversarial attacks, very few defenses are known to date. While adversarial training can improve the empirical robustness of image classifiers, a direct extension to object detection is very expensive. This work is motivated by recent progress on certified classification...
Contextual Reserve Price Optimization in Auctions via Mixed Integer Programming
https://papers.nips.cc/paper_files/paper/2020/hash/0e1bacf07b14673fcdb553da51b999a5-Abstract.html
Joey Huchette, Haihao Lu, Hossein Esfandiari, Vahab Mirrokni
https://papers.nips.cc/paper_files/paper/2020/hash/0e1bacf07b14673fcdb553da51b999a5-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0e1bacf07b14673fcdb553da51b999a5-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9833-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0e1bacf07b14673fcdb553da51b999a5-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0e1bacf07b14673fcdb553da51b999a5-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0e1bacf07b14673fcdb553da51b999a5-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0e1bacf07b14673fcdb553da51b999a5-Supplemental.pdf
We study the problem of learning a linear model to set the reserve price in an auction, given contextual information, in order to maximize expected revenue from the seller side. First, we show that it is not possible to solve this problem in polynomial time unless the Exponential Time Hypothesis fails. Second, we prese...
ExpandNets: Linear Over-parameterization to Train Compact Convolutional Networks
https://papers.nips.cc/paper_files/paper/2020/hash/0e1ebad68af7f0ae4830b7ac92bc3c6f-Abstract.html
Shuxuan Guo, Jose M. Alvarez, Mathieu Salzmann
https://papers.nips.cc/paper_files/paper/2020/hash/0e1ebad68af7f0ae4830b7ac92bc3c6f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0e1ebad68af7f0ae4830b7ac92bc3c6f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9834-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0e1ebad68af7f0ae4830b7ac92bc3c6f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0e1ebad68af7f0ae4830b7ac92bc3c6f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0e1ebad68af7f0ae4830b7ac92bc3c6f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0e1ebad68af7f0ae4830b7ac92bc3c6f-Supplemental.pdf
We introduce an approach to training a given compact network. To this end, we leverage over-parameterization, which typically improves both neural network optimization and generalization. Specifically, we propose to expand each linear layer of the compact network into multiple consecutive linear layers, without adding ...
FleXOR: Trainable Fractional Quantization
https://papers.nips.cc/paper_files/paper/2020/hash/0e230b1a582d76526b7ad7fc62ae937d-Abstract.html
Dongsoo Lee, Se Jung Kwon, Byeongwook Kim, Yongkweon Jeon, Baeseong Park, Jeongin Yun
https://papers.nips.cc/paper_files/paper/2020/hash/0e230b1a582d76526b7ad7fc62ae937d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0e230b1a582d76526b7ad7fc62ae937d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9835-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0e230b1a582d76526b7ad7fc62ae937d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0e230b1a582d76526b7ad7fc62ae937d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0e230b1a582d76526b7ad7fc62ae937d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0e230b1a582d76526b7ad7fc62ae937d-Supplemental.pdf
Quantization based on binary codes is gaining attention because each quantized bit can be directly utilized for computations without dequantization, using look-up tables. Previous attempts, however, only allow for integer numbers of quantization bits, which ends up restricting the search space for compression ratio ...
The Implications of Local Correlation on Learning Some Deep Functions
https://papers.nips.cc/paper_files/paper/2020/hash/0e4ceef65add6cf21c0f3f9da53b71c0-Abstract.html
Eran Malach, Shai Shalev-Shwartz
https://papers.nips.cc/paper_files/paper/2020/hash/0e4ceef65add6cf21c0f3f9da53b71c0-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0e4ceef65add6cf21c0f3f9da53b71c0-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9836-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0e4ceef65add6cf21c0f3f9da53b71c0-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0e4ceef65add6cf21c0f3f9da53b71c0-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0e4ceef65add6cf21c0f3f9da53b71c0-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0e4ceef65add6cf21c0f3f9da53b71c0-Supplemental.pdf
It is known that learning deep neural-networks is computationally hard in the worst-case. In fact, the proofs of such hardness results show that even weakly learning deep networks is hard. In other words, no efficient algorithm can find a predictor that is slightly better than a random guess. However, we observe that o...
Learning to search efficiently for causally near-optimal treatments
https://papers.nips.cc/paper_files/paper/2020/hash/0e900ad84f63618452210ab8baae0218-Abstract.html
Samuel Håkansson, Viktor Lindblom, Omer Gottesman, Fredrik D. Johansson
https://papers.nips.cc/paper_files/paper/2020/hash/0e900ad84f63618452210ab8baae0218-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0e900ad84f63618452210ab8baae0218-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9837-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0e900ad84f63618452210ab8baae0218-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0e900ad84f63618452210ab8baae0218-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0e900ad84f63618452210ab8baae0218-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0e900ad84f63618452210ab8baae0218-Supplemental.pdf
Finding an effective medical treatment often requires a search by trial and error. Making this search more efficient by minimizing the number of unnecessary trials could lower both costs and patient suffering. We formalize this problem as learning a policy for finding a near-optimal treatment in a minimum number of tri...
A Game Theoretic Analysis of Additive Adversarial Attacks and Defenses
https://papers.nips.cc/paper_files/paper/2020/hash/0ea6f098a59fcf2462afc50d130ff034-Abstract.html
Ambar Pal, Rene Vidal
https://papers.nips.cc/paper_files/paper/2020/hash/0ea6f098a59fcf2462afc50d130ff034-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0ea6f098a59fcf2462afc50d130ff034-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9838-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0ea6f098a59fcf2462afc50d130ff034-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0ea6f098a59fcf2462afc50d130ff034-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0ea6f098a59fcf2462afc50d130ff034-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0ea6f098a59fcf2462afc50d130ff034-Supplemental.pdf
Research in adversarial learning follows a cat-and-mouse game between attackers and defenders: attacks are proposed, they are mitigated by new defenses, and subsequently new attacks are proposed that break earlier defenses, and so on. However, it has remained unclear whether there are conditions under which ...
Posterior Network: Uncertainty Estimation without OOD Samples via Density-Based Pseudo-Counts
https://papers.nips.cc/paper_files/paper/2020/hash/0eac690d7059a8de4b48e90f14510391-Abstract.html
Bertrand Charpentier, Daniel Zügner, Stephan Günnemann
https://papers.nips.cc/paper_files/paper/2020/hash/0eac690d7059a8de4b48e90f14510391-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0eac690d7059a8de4b48e90f14510391-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9839-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0eac690d7059a8de4b48e90f14510391-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0eac690d7059a8de4b48e90f14510391-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0eac690d7059a8de4b48e90f14510391-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0eac690d7059a8de4b48e90f14510391-Supplemental.pdf
In this work we propose the Posterior Network (PostNet), which uses Normalizing Flows to predict an individual closed-form posterior distribution over predicted probabilities for any input sample. The posterior distributions learned by PostNet accurately reflect uncertainty for in- and out-of-distribution data -- withou...
Recurrent Quantum Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/0ec96be397dd6d3cf2fecb4a2d627c1c-Abstract.html
Johannes Bausch
https://papers.nips.cc/paper_files/paper/2020/hash/0ec96be397dd6d3cf2fecb4a2d627c1c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0ec96be397dd6d3cf2fecb4a2d627c1c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9840-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0ec96be397dd6d3cf2fecb4a2d627c1c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0ec96be397dd6d3cf2fecb4a2d627c1c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0ec96be397dd6d3cf2fecb4a2d627c1c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0ec96be397dd6d3cf2fecb4a2d627c1c-Supplemental.zip
Recurrent neural networks are the foundation of many sequence-to-sequence models in machine learning, such as machine translation and speech synthesis. With applied quantum computing in its infancy, there already exist quantum machine learning models such as variational quantum eigensolvers which have been used e.g. in...
No-Regret Learning and Mixed Nash Equilibria: They Do Not Mix
https://papers.nips.cc/paper_files/paper/2020/hash/0ed9422357395a0d4879191c66f4faa2-Abstract.html
Emmanouil-Vasileios Vlatakis-Gkaragkounis, Lampros Flokas, Thanasis Lianeas, Panayotis Mertikopoulos, Georgios Piliouras
https://papers.nips.cc/paper_files/paper/2020/hash/0ed9422357395a0d4879191c66f4faa2-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0ed9422357395a0d4879191c66f4faa2-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9841-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0ed9422357395a0d4879191c66f4faa2-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0ed9422357395a0d4879191c66f4faa2-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0ed9422357395a0d4879191c66f4faa2-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0ed9422357395a0d4879191c66f4faa2-Supplemental.pdf
Understanding the behavior of no-regret dynamics in general N-player games is a fundamental question in online learning and game theory. A folk result in the field states that, in finite games, the empirical frequency of play under no-regret learning converges to the game’s set of coarse correlated equilibria. By contr...
A Unifying View of Optimism in Episodic Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2020/hash/0f0e13216262f4a201bec128044dd30f-Abstract.html
Gergely Neu, Ciara Pike-Burke
https://papers.nips.cc/paper_files/paper/2020/hash/0f0e13216262f4a201bec128044dd30f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0f0e13216262f4a201bec128044dd30f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9842-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0f0e13216262f4a201bec128044dd30f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0f0e13216262f4a201bec128044dd30f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0f0e13216262f4a201bec128044dd30f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0f0e13216262f4a201bec128044dd30f-Supplemental.pdf
The principle of "optimism in the face of uncertainty" underpins many theoretically successful reinforcement learning algorithms. In this paper we provide a general framework for designing, analyzing and implementing such algorithms in the episodic reinforcement learning problem. This framework is built upon Lagrangi...
Continuous Submodular Maximization: Beyond DR-Submodularity
https://papers.nips.cc/paper_files/paper/2020/hash/0f34132b15dd02f282a11ea1e322a96d-Abstract.html
Moran Feldman, Amin Karbasi
https://papers.nips.cc/paper_files/paper/2020/hash/0f34132b15dd02f282a11ea1e322a96d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0f34132b15dd02f282a11ea1e322a96d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9843-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0f34132b15dd02f282a11ea1e322a96d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0f34132b15dd02f282a11ea1e322a96d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0f34132b15dd02f282a11ea1e322a96d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0f34132b15dd02f282a11ea1e322a96d-Supplemental.pdf
In this paper, we propose the first continuous optimization algorithms that achieve a constant factor approximation guarantee for the problem of monotone continuous submodular maximization subject to a linear constraint. We first prove that a simple variant of the vanilla coordinate ascent, called \COORDINATE-ASCENT+, ...
An Asymptotically Optimal Primal-Dual Incremental Algorithm for Contextual Linear Bandits
https://papers.nips.cc/paper_files/paper/2020/hash/0f34314d2dd0c1b9311cb8f40eb4f255-Abstract.html
Andrea Tirinzoni, Matteo Pirotta, Marcello Restelli, Alessandro Lazaric
https://papers.nips.cc/paper_files/paper/2020/hash/0f34314d2dd0c1b9311cb8f40eb4f255-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0f34314d2dd0c1b9311cb8f40eb4f255-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9844-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0f34314d2dd0c1b9311cb8f40eb4f255-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0f34314d2dd0c1b9311cb8f40eb4f255-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0f34314d2dd0c1b9311cb8f40eb4f255-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0f34314d2dd0c1b9311cb8f40eb4f255-Supplemental.pdf
In the contextual linear bandit setting, algorithms built on the optimism principle fail to exploit the structure of the problem and have been shown to be asymptotically suboptimal. In this paper, we follow recent approaches of deriving asymptotically optimal algorithms from problem-dependent regret lower bounds and we...
Assessing SATNet's Ability to Solve the Symbol Grounding Problem
https://papers.nips.cc/paper_files/paper/2020/hash/0ff8033cf9437c213ee13937b1c4c455-Abstract.html
Oscar Chang, Lampros Flokas, Hod Lipson, Michael Spranger
https://papers.nips.cc/paper_files/paper/2020/hash/0ff8033cf9437c213ee13937b1c4c455-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0ff8033cf9437c213ee13937b1c4c455-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9845-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0ff8033cf9437c213ee13937b1c4c455-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0ff8033cf9437c213ee13937b1c4c455-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0ff8033cf9437c213ee13937b1c4c455-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0ff8033cf9437c213ee13937b1c4c455-Supplemental.zip
SATNet is an award-winning MAXSAT solver that can be used to infer logical rules and integrated as a differentiable layer in a deep neural network. It had been shown to solve Sudoku puzzles visually from examples of puzzle digit images, and was heralded as an impressive achievement towards the longstanding AI goal of c...
A Bayesian Nonparametrics View into Deep Representations
https://papers.nips.cc/paper_files/paper/2020/hash/0ffaca95e3e5242ba1097ad8a9a6e95d-Abstract.html
Michał Jamroż, Marcin Kurdziel, Mateusz Opala
https://papers.nips.cc/paper_files/paper/2020/hash/0ffaca95e3e5242ba1097ad8a9a6e95d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0ffaca95e3e5242ba1097ad8a9a6e95d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9846-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0ffaca95e3e5242ba1097ad8a9a6e95d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0ffaca95e3e5242ba1097ad8a9a6e95d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0ffaca95e3e5242ba1097ad8a9a6e95d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0ffaca95e3e5242ba1097ad8a9a6e95d-Supplemental.pdf
We investigate neural network representations from a probabilistic perspective. Specifically, we leverage Bayesian nonparametrics to construct models of neural activations in Convolutional Neural Networks (CNNs) and latent representations in Variational Autoencoders (VAEs). This allows us to formulate a tractable compl...
On the Similarity between the Laplace and Neural Tangent Kernels
https://papers.nips.cc/paper_files/paper/2020/hash/1006ff12c465532f8c574aeaa4461b16-Abstract.html
Amnon Geifman, Abhay Yadav, Yoni Kasten, Meirav Galun, David Jacobs, Basri Ronen
https://papers.nips.cc/paper_files/paper/2020/hash/1006ff12c465532f8c574aeaa4461b16-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1006ff12c465532f8c574aeaa4461b16-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9847-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1006ff12c465532f8c574aeaa4461b16-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1006ff12c465532f8c574aeaa4461b16-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1006ff12c465532f8c574aeaa4461b16-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1006ff12c465532f8c574aeaa4461b16-Supplemental.pdf
Recent theoretical work has shown that massively overparameterized neural networks are equivalent to kernel regressors that use Neural Tangent Kernels (NTKs). Experiments show that these kernel methods perform similarly to real neural networks. Here we show that NTK for fully connected networks with ReLU activation i...
A causal view of compositional zero-shot recognition
https://papers.nips.cc/paper_files/paper/2020/hash/1010cedf85f6a7e24b087e63235dc12e-Abstract.html
Yuval Atzmon, Felix Kreuk, Uri Shalit, Gal Chechik
https://papers.nips.cc/paper_files/paper/2020/hash/1010cedf85f6a7e24b087e63235dc12e-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1010cedf85f6a7e24b087e63235dc12e-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9848-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1010cedf85f6a7e24b087e63235dc12e-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1010cedf85f6a7e24b087e63235dc12e-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1010cedf85f6a7e24b087e63235dc12e-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1010cedf85f6a7e24b087e63235dc12e-Supplemental.pdf
Here we describe an approach for compositional generalization that builds on causal ideas. First, we describe compositional zero-shot learning from a causal perspective, and propose to view zero-shot inference as finding "which intervention caused the image?". Second, we present a causal-inspired embedding model that l...
HiPPO: Recurrent Memory with Optimal Polynomial Projections
https://papers.nips.cc/paper_files/paper/2020/hash/102f0bb6efb3a6128a3c750dd16729be-Abstract.html
Albert Gu, Tri Dao, Stefano Ermon, Atri Rudra, Christopher Ré
https://papers.nips.cc/paper_files/paper/2020/hash/102f0bb6efb3a6128a3c750dd16729be-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/102f0bb6efb3a6128a3c750dd16729be-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9849-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/102f0bb6efb3a6128a3c750dd16729be-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/102f0bb6efb3a6128a3c750dd16729be-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/102f0bb6efb3a6128a3c750dd16729be-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/102f0bb6efb3a6128a3c750dd16729be-Supplemental.pdf
A central problem in learning from sequential data is representing cumulative history in an incremental fashion as more data is processed. We introduce a general framework (HiPPO) for the online compression of continuous signals and discrete time series by projection onto polynomial bases. Given a measure that specifie...
Auto Learning Attention
https://papers.nips.cc/paper_files/paper/2020/hash/103303dd56a731e377d01f6a37badae3-Abstract.html
Benteng Ma, Jing Zhang, Yong Xia, Dacheng Tao
https://papers.nips.cc/paper_files/paper/2020/hash/103303dd56a731e377d01f6a37badae3-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/103303dd56a731e377d01f6a37badae3-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9850-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/103303dd56a731e377d01f6a37badae3-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/103303dd56a731e377d01f6a37badae3-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/103303dd56a731e377d01f6a37badae3-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/103303dd56a731e377d01f6a37badae3-Supplemental.pdf
Attention modules have been demonstrated effective in strengthening the representation ability of a neural network by reweighting spatial or channel features or stacking both operations sequentially. However, designing the structures of different attention operations requires a large amount of computation and extensive expert...
CASTLE: Regularization via Auxiliary Causal Graph Discovery
https://papers.nips.cc/paper_files/paper/2020/hash/1068bceb19323fe72b2b344ccf85c254-Abstract.html
Trent Kyono, Yao Zhang, Mihaela van der Schaar
https://papers.nips.cc/paper_files/paper/2020/hash/1068bceb19323fe72b2b344ccf85c254-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1068bceb19323fe72b2b344ccf85c254-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9851-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1068bceb19323fe72b2b344ccf85c254-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1068bceb19323fe72b2b344ccf85c254-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1068bceb19323fe72b2b344ccf85c254-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1068bceb19323fe72b2b344ccf85c254-Supplemental.zip
Regularization improves generalization of supervised models to out-of-sample data. Prior works have shown that prediction in the causal direction (effect from cause) results in lower testing error than the anti-causal direction. However, existing regularization methods are agnostic of causality. We introduce Causal Str...
Long-Tailed Classification by Keeping the Good and Removing the Bad Momentum Causal Effect
https://papers.nips.cc/paper_files/paper/2020/hash/1091660f3dff84fd648efe31391c5524-Abstract.html
Kaihua Tang, Jianqiang Huang, Hanwang Zhang
https://papers.nips.cc/paper_files/paper/2020/hash/1091660f3dff84fd648efe31391c5524-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1091660f3dff84fd648efe31391c5524-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9852-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1091660f3dff84fd648efe31391c5524-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1091660f3dff84fd648efe31391c5524-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1091660f3dff84fd648efe31391c5524-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1091660f3dff84fd648efe31391c5524-Supplemental.pdf
As the class size grows, maintaining a balanced dataset across many classes is challenging because the data are long-tailed in nature; it is even impossible when the sample-of-interest co-exists with each other in one collectable unit, e.g., multiple visual instances in one image. Therefore, long-tailed classification ...
Explainable Voting
https://papers.nips.cc/paper_files/paper/2020/hash/10c72a9d42dd07a028ee910f7854da5d-Abstract.html
Dominik Peters, Ariel D. Procaccia, Alexandros Psomas, Zixin Zhou
https://papers.nips.cc/paper_files/paper/2020/hash/10c72a9d42dd07a028ee910f7854da5d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/10c72a9d42dd07a028ee910f7854da5d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9853-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/10c72a9d42dd07a028ee910f7854da5d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/10c72a9d42dd07a028ee910f7854da5d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/10c72a9d42dd07a028ee910f7854da5d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/10c72a9d42dd07a028ee910f7854da5d-Supplemental.pdf
The design of voting rules is traditionally guided by desirable axioms. Recent work shows that, surprisingly, the axiomatic approach can also support the generation of explanations for voting outcomes. However, no bounds on the size of these explanations are given; for all we know, they may be unbearably tedious. We pro...
Deep Archimedean Copulas
https://papers.nips.cc/paper_files/paper/2020/hash/10eb6500bd1e4a3704818012a1593cc3-Abstract.html
Chun Kai Ling, Fei Fang, J. Zico Kolter
https://papers.nips.cc/paper_files/paper/2020/hash/10eb6500bd1e4a3704818012a1593cc3-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/10eb6500bd1e4a3704818012a1593cc3-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9854-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/10eb6500bd1e4a3704818012a1593cc3-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/10eb6500bd1e4a3704818012a1593cc3-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/10eb6500bd1e4a3704818012a1593cc3-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/10eb6500bd1e4a3704818012a1593cc3-Supplemental.pdf
A central problem in machine learning and statistics is to model joint densities of random variables from data. Copulas are joint cumulative distribution functions with uniform marginal distributions and are used to capture interdependencies in isolation from marginals. Copulas are widely used within statistics, but ha...
Re-Examining Linear Embeddings for High-Dimensional Bayesian Optimization
https://papers.nips.cc/paper_files/paper/2020/hash/10fb6cfa4c990d2bad5ddef4f70e8ba2-Abstract.html
Ben Letham, Roberto Calandra, Akshara Rai, Eytan Bakshy
https://papers.nips.cc/paper_files/paper/2020/hash/10fb6cfa4c990d2bad5ddef4f70e8ba2-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/10fb6cfa4c990d2bad5ddef4f70e8ba2-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9855-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/10fb6cfa4c990d2bad5ddef4f70e8ba2-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/10fb6cfa4c990d2bad5ddef4f70e8ba2-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/10fb6cfa4c990d2bad5ddef4f70e8ba2-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/10fb6cfa4c990d2bad5ddef4f70e8ba2-Supplemental.pdf
Bayesian optimization (BO) is a popular approach to optimize expensive-to-evaluate black-box functions. A significant challenge in BO is to scale to high-dimensional parameter spaces while retaining sample efficiency. A solution considered in existing literature is to embed the high-dimensional space in a lower-dimensi...
UnModNet: Learning to Unwrap a Modulo Image for High Dynamic Range Imaging
https://papers.nips.cc/paper_files/paper/2020/hash/1102a326d5f7c9e04fc3c89d0ede88c9-Abstract.html
Chu Zhou, Hang Zhao, Jin Han, Chang Xu, Chao Xu, Tiejun Huang, Boxin Shi
https://papers.nips.cc/paper_files/paper/2020/hash/1102a326d5f7c9e04fc3c89d0ede88c9-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1102a326d5f7c9e04fc3c89d0ede88c9-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9856-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1102a326d5f7c9e04fc3c89d0ede88c9-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1102a326d5f7c9e04fc3c89d0ede88c9-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1102a326d5f7c9e04fc3c89d0ede88c9-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1102a326d5f7c9e04fc3c89d0ede88c9-Supplemental.pdf
A conventional camera often suffers from over- or under-exposure when recording a real-world scene with a very high dynamic range (HDR). In contrast, a modulo camera with a Markov random field (MRF) based unwrapping algorithm can theoretically accomplish unbounded dynamic range but shows degenerate performances when th...
Thunder: a Fast Coordinate Selection Solver for Sparse Learning
https://papers.nips.cc/paper_files/paper/2020/hash/11348e03e23b137d55d94464250a67a2-Abstract.html
Shaogang Ren, Weijie Zhao, Ping Li
https://papers.nips.cc/paper_files/paper/2020/hash/11348e03e23b137d55d94464250a67a2-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/11348e03e23b137d55d94464250a67a2-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9857-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/11348e03e23b137d55d94464250a67a2-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/11348e03e23b137d55d94464250a67a2-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/11348e03e23b137d55d94464250a67a2-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/11348e03e23b137d55d94464250a67a2-Supplemental.pdf
L1 regularization has been broadly employed to pursue model sparsity. Despite the non-smoothness, people have developed efficient algorithms by leveraging the sparsity and convexity of the problems. In this paper, we propose a novel active incremental approach to further improve the efficiency of the solvers. We show t...
Neural Networks Fail to Learn Periodic Functions and How to Fix It
https://papers.nips.cc/paper_files/paper/2020/hash/1160453108d3e537255e9f7b931f4e90-Abstract.html
Liu Ziyin, Tilman Hartwig, Masahito Ueda
https://papers.nips.cc/paper_files/paper/2020/hash/1160453108d3e537255e9f7b931f4e90-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1160453108d3e537255e9f7b931f4e90-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9858-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1160453108d3e537255e9f7b931f4e90-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1160453108d3e537255e9f7b931f4e90-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1160453108d3e537255e9f7b931f4e90-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1160453108d3e537255e9f7b931f4e90-Supplemental.pdf
Previous literature offers limited clues on how to learn a periodic function using modern neural networks. We start with a study of the extrapolation properties of neural networks; we prove and demonstrate experimentally that the standard activation functions, such as ReLU, tanh, sigmoid, along with their variants, al...
Distribution Matching for Crowd Counting
https://papers.nips.cc/paper_files/paper/2020/hash/118bd558033a1016fcc82560c65cca5f-Abstract.html
Boyu Wang, Huidong Liu, Dimitris Samaras, Minh Hoai Nguyen
https://papers.nips.cc/paper_files/paper/2020/hash/118bd558033a1016fcc82560c65cca5f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/118bd558033a1016fcc82560c65cca5f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9859-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/118bd558033a1016fcc82560c65cca5f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/118bd558033a1016fcc82560c65cca5f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/118bd558033a1016fcc82560c65cca5f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/118bd558033a1016fcc82560c65cca5f-Supplemental.pdf
In crowd counting, each training image contains multiple people, where each person is annotated by a dot. Existing crowd counting methods need to use a Gaussian to smooth each annotated dot or to estimate the likelihood of every pixel given the annotated point. In this paper, we show that imposing Gaussians to annotati...
Correspondence learning via linearly-invariant embedding
https://papers.nips.cc/paper_files/paper/2020/hash/11953163dd7fb12669b41a48f78a29b6-Abstract.html
Riccardo Marin, Marie-Julie Rakotosaona, Simone Melzi, Maks Ovsjanikov
https://papers.nips.cc/paper_files/paper/2020/hash/11953163dd7fb12669b41a48f78a29b6-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/11953163dd7fb12669b41a48f78a29b6-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9860-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/11953163dd7fb12669b41a48f78a29b6-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/11953163dd7fb12669b41a48f78a29b6-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/11953163dd7fb12669b41a48f78a29b6-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/11953163dd7fb12669b41a48f78a29b6-Supplemental.pdf
In this paper, we propose a fully differentiable pipeline for estimating accurate dense correspondences between 3D point clouds. The proposed pipeline is an extension and a generalization of the functional maps framework. However, instead of using the Laplace-Beltrami eigenfunctions as done in virtually all previous wo...
Learning to Dispatch for Job Shop Scheduling via Deep Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2020/hash/11958dfee29b6709f48a9ba0387a2431-Abstract.html
Cong Zhang, Wen Song, Zhiguang Cao, Jie Zhang, Puay Siew Tan, Xu Chi
https://papers.nips.cc/paper_files/paper/2020/hash/11958dfee29b6709f48a9ba0387a2431-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/11958dfee29b6709f48a9ba0387a2431-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9861-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/11958dfee29b6709f48a9ba0387a2431-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/11958dfee29b6709f48a9ba0387a2431-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/11958dfee29b6709f48a9ba0387a2431-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/11958dfee29b6709f48a9ba0387a2431-Supplemental.pdf
Priority dispatching rule (PDR) is widely used for solving real-world Job-shop scheduling problem (JSSP). However, the design of effective PDRs is a tedious task, requiring a myriad of specialized knowledge and often delivering limited performance. In this paper, we propose to automatically learn PDRs via an end-to-end...
On Adaptive Attacks to Adversarial Example Defenses
https://papers.nips.cc/paper_files/paper/2020/hash/11f38f8ecd71867b42433548d1078e38-Abstract.html
Florian Tramer, Nicholas Carlini, Wieland Brendel, Aleksander Madry
https://papers.nips.cc/paper_files/paper/2020/hash/11f38f8ecd71867b42433548d1078e38-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/11f38f8ecd71867b42433548d1078e38-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9862-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/11f38f8ecd71867b42433548d1078e38-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/11f38f8ecd71867b42433548d1078e38-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/11f38f8ecd71867b42433548d1078e38-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/11f38f8ecd71867b42433548d1078e38-Supplemental.zip
While prior evaluation papers focused mainly on the end result---showing that a defense was ineffective---this paper focuses on laying out the methodology and the approach necessary to perform an adaptive attack. Some of our attack strategies are generalizable, but no single strategy would have been sufficient for all ...
Sinkhorn Natural Gradient for Generative Models
https://papers.nips.cc/paper_files/paper/2020/hash/122e27d57ae8ecb37f3f1da67abb33cb-Abstract.html
Zebang Shen, Zhenfu Wang, Alejandro Ribeiro, Hamed Hassani
https://papers.nips.cc/paper_files/paper/2020/hash/122e27d57ae8ecb37f3f1da67abb33cb-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/122e27d57ae8ecb37f3f1da67abb33cb-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9863-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/122e27d57ae8ecb37f3f1da67abb33cb-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/122e27d57ae8ecb37f3f1da67abb33cb-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/122e27d57ae8ecb37f3f1da67abb33cb-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/122e27d57ae8ecb37f3f1da67abb33cb-Supplemental.pdf
We consider the problem of minimizing a functional over a parametric family of probability measures, where the parameterization is characterized via a push-forward structure. An important application of this problem is in training generative adversarial networks. In this regard, we propose a novel Sinkhorn Natural G...
Online Sinkhorn: Optimal Transport distances from sample streams
https://papers.nips.cc/paper_files/paper/2020/hash/123650dd0560587918b3d771cf0c0171-Abstract.html
Arthur Mensch, Gabriel Peyré
https://papers.nips.cc/paper_files/paper/2020/hash/123650dd0560587918b3d771cf0c0171-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/123650dd0560587918b3d771cf0c0171-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9864-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/123650dd0560587918b3d771cf0c0171-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/123650dd0560587918b3d771cf0c0171-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/123650dd0560587918b3d771cf0c0171-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/123650dd0560587918b3d771cf0c0171-Supplemental.pdf
Optimal Transport (OT) distances are now routinely used as loss functions in ML tasks. Yet, computing OT distances between arbitrary (i.e. not necessarily discrete) probability distributions remains an open problem. This paper introduces a new online estimator of entropy-regularized OT distances between two such arbitr...
Ultrahyperbolic Representation Learning
https://papers.nips.cc/paper_files/paper/2020/hash/123b7f02433572a0a560e620311a469c-Abstract.html
Marc Law, Jos Stam
https://papers.nips.cc/paper_files/paper/2020/hash/123b7f02433572a0a560e620311a469c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/123b7f02433572a0a560e620311a469c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9865-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/123b7f02433572a0a560e620311a469c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/123b7f02433572a0a560e620311a469c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/123b7f02433572a0a560e620311a469c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/123b7f02433572a0a560e620311a469c-Supplemental.pdf
In machine learning, data is usually represented in a (flat) Euclidean space where distances between points are along straight lines. Researchers have recently considered more exotic (non-Euclidean) Riemannian manifolds such as hyperbolic space which is well suited for tree-like data. In this paper, we propose a repres...
Locally-Adaptive Nonparametric Online Learning
https://papers.nips.cc/paper_files/paper/2020/hash/12780ea688a71dabc284b064add459a4-Abstract.html
Ilja Kuzborskij, Nicolò Cesa-Bianchi
https://papers.nips.cc/paper_files/paper/2020/hash/12780ea688a71dabc284b064add459a4-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/12780ea688a71dabc284b064add459a4-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9866-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/12780ea688a71dabc284b064add459a4-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/12780ea688a71dabc284b064add459a4-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/12780ea688a71dabc284b064add459a4-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/12780ea688a71dabc284b064add459a4-Supplemental.pdf
One of the main strengths of online algorithms is their ability to adapt to arbitrary data sequences. This is especially important in nonparametric settings, where performance is measured against rich classes of comparator functions that are able to fit complex environments. Although such hard comparators and complex e...
Compositional Generalization via Neural-Symbolic Stack Machines
https://papers.nips.cc/paper_files/paper/2020/hash/12b1e42dc0746f22cf361267de07073f-Abstract.html
Xinyun Chen, Chen Liang, Adams Wei Yu, Dawn Song, Denny Zhou
https://papers.nips.cc/paper_files/paper/2020/hash/12b1e42dc0746f22cf361267de07073f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/12b1e42dc0746f22cf361267de07073f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9867-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/12b1e42dc0746f22cf361267de07073f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/12b1e42dc0746f22cf361267de07073f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/12b1e42dc0746f22cf361267de07073f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/12b1e42dc0746f22cf361267de07073f-Supplemental.pdf
Despite achieving tremendous success, existing deep learning models have exposed limitations in compositional generalization, the capability to learn compositional rules and apply them to unseen cases in a systematic manner. To tackle this issue, we propose the Neural-Symbolic Stack Machine (NeSS). It contains a neural...
Graphon Neural Networks and the Transferability of Graph Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/12bcd658ef0a540cabc36cdf2b1046fd-Abstract.html
Luana Ruiz, Luiz Chamon, Alejandro Ribeiro
https://papers.nips.cc/paper_files/paper/2020/hash/12bcd658ef0a540cabc36cdf2b1046fd-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/12bcd658ef0a540cabc36cdf2b1046fd-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9868-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/12bcd658ef0a540cabc36cdf2b1046fd-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/12bcd658ef0a540cabc36cdf2b1046fd-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/12bcd658ef0a540cabc36cdf2b1046fd-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/12bcd658ef0a540cabc36cdf2b1046fd-Supplemental.pdf
Graph neural networks (GNNs) rely on graph convolutions to extract local features from network data. These graph convolutions combine information from adjacent nodes using coefficients that are shared across all nodes. Since these coefficients are shared and do not depend on the graph, one can envision using the same c...
Unreasonable Effectiveness of Greedy Algorithms in Multi-Armed Bandit with Many Arms
https://papers.nips.cc/paper_files/paper/2020/hash/12d16adf4a9355513f9d574b76087a08-Abstract.html
Mohsen Bayati, Nima Hamidi, Ramesh Johari, Khashayar Khosravi
https://papers.nips.cc/paper_files/paper/2020/hash/12d16adf4a9355513f9d574b76087a08-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/12d16adf4a9355513f9d574b76087a08-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9869-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/12d16adf4a9355513f9d574b76087a08-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/12d16adf4a9355513f9d574b76087a08-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/12d16adf4a9355513f9d574b76087a08-Review.html
null
We study the structure of regret-minimizing policies in the {\em many-armed} Bayesian multi-armed bandit problem: in particular, with $k$ the number of arms and $T$ the time horizon, we consider the case where $k \geq \sqrt{T}$. We first show that {\em subsampling} is a critical step for designing optimal policies. In...
Gamma-Models: Generative Temporal Difference Learning for Infinite-Horizon Prediction
https://papers.nips.cc/paper_files/paper/2020/hash/12ffb0968f2f56e51a59a6beb37b2859-Abstract.html
Michael Janner, Igor Mordatch, Sergey Levine
https://papers.nips.cc/paper_files/paper/2020/hash/12ffb0968f2f56e51a59a6beb37b2859-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/12ffb0968f2f56e51a59a6beb37b2859-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9870-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/12ffb0968f2f56e51a59a6beb37b2859-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/12ffb0968f2f56e51a59a6beb37b2859-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/12ffb0968f2f56e51a59a6beb37b2859-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/12ffb0968f2f56e51a59a6beb37b2859-Supplemental.pdf
We introduce the gamma-model, a predictive model of environment dynamics with an infinite, probabilistic horizon. Replacing standard single-step models with gamma-models leads to generalizations of the procedures that form the foundation of model-based control, including the model rollout and model-based value estimati...
Deep Transformers with Latent Depth
https://papers.nips.cc/paper_files/paper/2020/hash/1325cdae3b6f0f91a1b629307bf2d498-Abstract.html
Xian Li, Asa Cooper Stickland, Yuqing Tang, Xiang Kong
https://papers.nips.cc/paper_files/paper/2020/hash/1325cdae3b6f0f91a1b629307bf2d498-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1325cdae3b6f0f91a1b629307bf2d498-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9871-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1325cdae3b6f0f91a1b629307bf2d498-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1325cdae3b6f0f91a1b629307bf2d498-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1325cdae3b6f0f91a1b629307bf2d498-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1325cdae3b6f0f91a1b629307bf2d498-Supplemental.pdf
The Transformer model has achieved state-of-the-art performance in many sequence modeling tasks. However, how to leverage model capacity with large or variable depths is still an open challenge. We present a probabilistic framework to automatically learn which layer(s) to use by learning the posterior distributions of ...
Neural Mesh Flow: 3D Manifold Mesh Generation via Diffeomorphic Flows
https://papers.nips.cc/paper_files/paper/2020/hash/1349b36b01e0e804a6c2909a6d0ec72a-Abstract.html
Kunal Gupta, Manmohan Chandraker
https://papers.nips.cc/paper_files/paper/2020/hash/1349b36b01e0e804a6c2909a6d0ec72a-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1349b36b01e0e804a6c2909a6d0ec72a-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9872-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1349b36b01e0e804a6c2909a6d0ec72a-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1349b36b01e0e804a6c2909a6d0ec72a-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1349b36b01e0e804a6c2909a6d0ec72a-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1349b36b01e0e804a6c2909a6d0ec72a-Supplemental.zip
Meshes are important representations of physical 3D entities in the virtual world. Applications like rendering, simulations and 3D printing require meshes to be manifold so that they can interact with the world like the real objects they represent. Prior methods generate meshes with great geometric accuracy but poor m...
Statistical control for spatio-temporal MEG/EEG source imaging with desparsified mutli-task Lasso
https://papers.nips.cc/paper_files/paper/2020/hash/1359aa933b48b754a2f54adb688bfa77-Abstract.html
Jerome-Alexis Chevalier, Joseph Salmon, Alexandre Gramfort, Bertrand Thirion
https://papers.nips.cc/paper_files/paper/2020/hash/1359aa933b48b754a2f54adb688bfa77-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1359aa933b48b754a2f54adb688bfa77-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9873-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1359aa933b48b754a2f54adb688bfa77-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1359aa933b48b754a2f54adb688bfa77-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1359aa933b48b754a2f54adb688bfa77-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1359aa933b48b754a2f54adb688bfa77-Supplemental.zip
Detecting where and when brain regions activate in a cognitive task or in a given clinical condition is the promise of non-invasive techniques like magnetoencephalography (MEG) or electroencephalography (EEG). This problem, referred to as source localization, or source imaging, poses however a high-dimensional statisti...
A Scalable MIP-based Method for Learning Optimal Multivariate Decision Trees
https://papers.nips.cc/paper_files/paper/2020/hash/1373b284bc381890049e92d324f56de0-Abstract.html
Haoran Zhu, Pavankumar Murali, Dzung Phan, Lam Nguyen, Jayant Kalagnanam
https://papers.nips.cc/paper_files/paper/2020/hash/1373b284bc381890049e92d324f56de0-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1373b284bc381890049e92d324f56de0-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9874-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1373b284bc381890049e92d324f56de0-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1373b284bc381890049e92d324f56de0-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1373b284bc381890049e92d324f56de0-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1373b284bc381890049e92d324f56de0-Supplemental.pdf
Several recent publications report advances in training optimal decision trees (ODTs) using mixed-integer programs (MIPs), due to algorithmic advances in integer programming and a growing interest in addressing the inherent suboptimality of heuristic approaches such as CART. In this paper, we propose a novel MIP formul...
Efficient Exact Verification of Binarized Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/1385974ed5904a438616ff7bdb3f7439-Abstract.html
Kai Jia, Martin Rinard
https://papers.nips.cc/paper_files/paper/2020/hash/1385974ed5904a438616ff7bdb3f7439-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1385974ed5904a438616ff7bdb3f7439-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9875-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1385974ed5904a438616ff7bdb3f7439-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1385974ed5904a438616ff7bdb3f7439-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1385974ed5904a438616ff7bdb3f7439-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1385974ed5904a438616ff7bdb3f7439-Supplemental.pdf
Concerned with the reliability of neural networks, researchers have developed verification techniques to prove their robustness. Most verifiers work with real-valued networks. Unfortunately, the exact (complete and sound) verifiers face scalability challenges and provide no correctness guarantees due to floating point ...
Ultra-Low Precision 4-bit Training of Deep Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/13b919438259814cd5be8cb45877d577-Abstract.html
Xiao Sun, Naigang Wang, Chia-Yu Chen, Jiamin Ni, Ankur Agrawal, Xiaodong Cui, Swagath Venkataramani, Kaoutar El Maghraoui, Vijayalakshmi (Viji) Srinivasan, Kailash Gopalakrishnan
https://papers.nips.cc/paper_files/paper/2020/hash/13b919438259814cd5be8cb45877d577-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/13b919438259814cd5be8cb45877d577-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9876-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/13b919438259814cd5be8cb45877d577-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/13b919438259814cd5be8cb45877d577-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/13b919438259814cd5be8cb45877d577-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/13b919438259814cd5be8cb45877d577-Supplemental.pdf
In this paper, we propose a number of novel techniques and numerical representation formats that enable, for the very first time, the precision of training systems to be aggressively scaled from 8-bits to 4-bits. To enable this advance, we explore a novel adaptive Gradient Scaling technique (Gradscale) that addresses t...
Bridging the Gap between Sample-based and One-shot Neural Architecture Search with BONAS
https://papers.nips.cc/paper_files/paper/2020/hash/13d4635deccc230c944e4ff6e03404b5-Abstract.html
Han Shi, Renjie Pi, Hang Xu, Zhenguo Li, James Kwok, Tong Zhang
https://papers.nips.cc/paper_files/paper/2020/hash/13d4635deccc230c944e4ff6e03404b5-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/13d4635deccc230c944e4ff6e03404b5-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9877-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/13d4635deccc230c944e4ff6e03404b5-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/13d4635deccc230c944e4ff6e03404b5-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/13d4635deccc230c944e4ff6e03404b5-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/13d4635deccc230c944e4ff6e03404b5-Supplemental.zip
Neural Architecture Search (NAS) has shown great potential in finding better neural network designs. Sample-based NAS is the most reliable approach, which aims at exploring the search space and evaluating the most promising architectures. However, it is computationally very costly. As a remedy, the one-shot approach h...
On Numerosity of Deep Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/13e36f06c66134ad65f532e90d898545-Abstract.html
Xi Zhang, Xiaolin Wu
https://papers.nips.cc/paper_files/paper/2020/hash/13e36f06c66134ad65f532e90d898545-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/13e36f06c66134ad65f532e90d898545-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9878-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/13e36f06c66134ad65f532e90d898545-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/13e36f06c66134ad65f532e90d898545-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/13e36f06c66134ad65f532e90d898545-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/13e36f06c66134ad65f532e90d898545-Supplemental.pdf
Recently, a provocative claim was published that number sense spontaneously emerges in a deep neural network trained merely for visual object recognition. This has, if true, far-reaching significance to the fields of machine learning and cognitive science alike. In this paper, we prove the above claim to be unfortuna...
Outlier Robust Mean Estimation with Subgaussian Rates via Stability
https://papers.nips.cc/paper_files/paper/2020/hash/13ec9935e17e00bed6ec8f06230e33a9-Abstract.html
Ilias Diakonikolas, Daniel M. Kane, Ankit Pensia
https://papers.nips.cc/paper_files/paper/2020/hash/13ec9935e17e00bed6ec8f06230e33a9-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/13ec9935e17e00bed6ec8f06230e33a9-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9879-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/13ec9935e17e00bed6ec8f06230e33a9-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/13ec9935e17e00bed6ec8f06230e33a9-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/13ec9935e17e00bed6ec8f06230e33a9-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/13ec9935e17e00bed6ec8f06230e33a9-Supplemental.pdf
We study the problem of outlier robust high-dimensional mean estimation under a bounded covariance assumption, and more broadly under bounded low-degree moment assumptions. We consider a standard stability condition from the recent robust statistics literature and prove that, except with exponentially small failure pro...
Self-Supervised Relationship Probing
https://papers.nips.cc/paper_files/paper/2020/hash/13f320e7b5ead1024ac95c3b208610db-Abstract.html
Jiuxiang Gu, Jason Kuen, Shafiq Joty, Jianfei Cai, Vlad Morariu, Handong Zhao, Tong Sun
https://papers.nips.cc/paper_files/paper/2020/hash/13f320e7b5ead1024ac95c3b208610db-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/13f320e7b5ead1024ac95c3b208610db-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9880-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/13f320e7b5ead1024ac95c3b208610db-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/13f320e7b5ead1024ac95c3b208610db-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/13f320e7b5ead1024ac95c3b208610db-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/13f320e7b5ead1024ac95c3b208610db-Supplemental.pdf
Structured representations of images that model visual relationships are beneficial for many vision and vision-language applications. However, current human-annotated visual relationship datasets suffer from the long-tailed predicate distribution problem which limits the potential of visual relationship models. In this...
Information Theoretic Counterfactual Learning from Missing-Not-At-Random Feedback
https://papers.nips.cc/paper_files/paper/2020/hash/13f3cf8c531952d72e5847c4183e6910-Abstract.html
Zifeng Wang, Xi Chen, Rui Wen, Shao-Lun Huang, Ercan Kuruoglu, Yefeng Zheng
https://papers.nips.cc/paper_files/paper/2020/hash/13f3cf8c531952d72e5847c4183e6910-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/13f3cf8c531952d72e5847c4183e6910-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9881-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/13f3cf8c531952d72e5847c4183e6910-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/13f3cf8c531952d72e5847c4183e6910-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/13f3cf8c531952d72e5847c4183e6910-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/13f3cf8c531952d72e5847c4183e6910-Supplemental.zip
Counterfactual learning for dealing with missing-not-at-random (MNAR) data is an intriguing topic in the recommendation literature, since MNAR data are ubiquitous in modern recommender systems. In contrast, missing-at-random (MAR) data, namely randomized controlled trials (RCTs), are usually required by most previous count...
Prophet Attention: Predicting Attention with Future Attention
https://papers.nips.cc/paper_files/paper/2020/hash/13fe9d84310e77f13a6d184dbf1232f3-Abstract.html
Fenglin Liu, Xuancheng Ren, Xian Wu, Shen Ge, Wei Fan, Yuexian Zou, Xu Sun
https://papers.nips.cc/paper_files/paper/2020/hash/13fe9d84310e77f13a6d184dbf1232f3-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/13fe9d84310e77f13a6d184dbf1232f3-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9882-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/13fe9d84310e77f13a6d184dbf1232f3-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/13fe9d84310e77f13a6d184dbf1232f3-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/13fe9d84310e77f13a6d184dbf1232f3-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/13fe9d84310e77f13a6d184dbf1232f3-Supplemental.pdf
Recently, attention-based models have been used extensively in many sequence-to-sequence learning systems. Especially for image captioning, the attention-based models are expected to ground correct image regions with proper generated words. However, for each time step in the decoding process, the attention-based models...
Language Models are Few-Shot Learners
https://papers.nips.cc/paper_files/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, ...
https://papers.nips.cc/paper_files/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9883-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Supplemental.pdf
We demonstrate that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even becoming competitive with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse l...
Margins are Insufficient for Explaining Gradient Boosting
https://papers.nips.cc/paper_files/paper/2020/hash/146f7dd4c91bc9d80cf4458ad6d6cd1b-Abstract.html
Allan Grønlund, Lior Kamma, Kasper Green Larsen
https://papers.nips.cc/paper_files/paper/2020/hash/146f7dd4c91bc9d80cf4458ad6d6cd1b-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/146f7dd4c91bc9d80cf4458ad6d6cd1b-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9884-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/146f7dd4c91bc9d80cf4458ad6d6cd1b-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/146f7dd4c91bc9d80cf4458ad6d6cd1b-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/146f7dd4c91bc9d80cf4458ad6d6cd1b-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/146f7dd4c91bc9d80cf4458ad6d6cd1b-Supplemental.pdf
Boosting is one of the most successful ideas in machine learning, achieving great practical performance with little fine-tuning. The success of boosted classifiers is most often attributed to improvements in margins. The focus on margin explanations was pioneered in the seminal work by Schapire et al. (1998) and has c...
Fourier-transform-based attribution priors improve the interpretability and stability of deep learning models for genomics
https://papers.nips.cc/paper_files/paper/2020/hash/1487987e862c44b91a0296cf3866387e-Abstract.html
Alex Tseng, Avanti Shrikumar, Anshul Kundaje
https://papers.nips.cc/paper_files/paper/2020/hash/1487987e862c44b91a0296cf3866387e-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1487987e862c44b91a0296cf3866387e-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9885-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1487987e862c44b91a0296cf3866387e-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1487987e862c44b91a0296cf3866387e-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1487987e862c44b91a0296cf3866387e-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1487987e862c44b91a0296cf3866387e-Supplemental.zip
Deep learning models can accurately map genomic DNA sequences to associated functional molecular readouts such as protein-DNA binding data. Base-resolution importance (i.e. "attribution") scores inferred from these models can highlight predictive sequence motifs and syntax. Unfortunately, these models are prone to over...
MomentumRNN: Integrating Momentum into Recurrent Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/149ef6419512be56a93169cd5e6fa8fd-Abstract.html
Tan Nguyen, Richard Baraniuk, Andrea Bertozzi, Stanley Osher, Bao Wang
https://papers.nips.cc/paper_files/paper/2020/hash/149ef6419512be56a93169cd5e6fa8fd-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/149ef6419512be56a93169cd5e6fa8fd-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9886-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/149ef6419512be56a93169cd5e6fa8fd-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/149ef6419512be56a93169cd5e6fa8fd-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/149ef6419512be56a93169cd5e6fa8fd-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/149ef6419512be56a93169cd5e6fa8fd-Supplemental.pdf
Designing deep neural networks is an art that often involves an expensive search over candidate architectures. To overcome this for recurrent neural nets (RNNs), we establish a connection between the hidden state dynamics in an RNN and gradient descent (GD). We then integrate momentum into this framework and propose a ...
Marginal Utility for Planning in Continuous or Large Discrete Action Spaces
https://papers.nips.cc/paper_files/paper/2020/hash/14da15db887a4b50efe5c1bc66537089-Abstract.html
Zaheen Ahmad, Levi Lelis, Michael Bowling
https://papers.nips.cc/paper_files/paper/2020/hash/14da15db887a4b50efe5c1bc66537089-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/14da15db887a4b50efe5c1bc66537089-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9887-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/14da15db887a4b50efe5c1bc66537089-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/14da15db887a4b50efe5c1bc66537089-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/14da15db887a4b50efe5c1bc66537089-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/14da15db887a4b50efe5c1bc66537089-Supplemental.pdf
Sample-based planning is a powerful family of algorithms for generating intelligent behavior from a model of the environment. Generating good candidate actions is critical to the success of sample-based planners, particularly in continuous or large action spaces. Typically, candidate action generation exhausts the acti...
Projected Stein Variational Gradient Descent
https://papers.nips.cc/paper_files/paper/2020/hash/14faf969228fc18fcd4fcf59437b0c97-Abstract.html
Peng Chen, Omar Ghattas
https://papers.nips.cc/paper_files/paper/2020/hash/14faf969228fc18fcd4fcf59437b0c97-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/14faf969228fc18fcd4fcf59437b0c97-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9888-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/14faf969228fc18fcd4fcf59437b0c97-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/14faf969228fc18fcd4fcf59437b0c97-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/14faf969228fc18fcd4fcf59437b0c97-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/14faf969228fc18fcd4fcf59437b0c97-Supplemental.pdf
The curse of dimensionality is a longstanding challenge in Bayesian inference in high dimensions. In this work, we propose a projected Stein variational gradient descent (pSVGD) method to overcome this challenge by exploiting the fundamental property of intrinsic low dimensionality of the data-informed subspace ste...
Minimax Lower Bounds for Transfer Learning with Linear and One-hidden Layer Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/151d21647527d1079781ba6ae6571ffd-Abstract.html
Mohammadreza Mousavi Kalan, Zalan Fabian, Salman Avestimehr, Mahdi Soltanolkotabi
https://papers.nips.cc/paper_files/paper/2020/hash/151d21647527d1079781ba6ae6571ffd-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/151d21647527d1079781ba6ae6571ffd-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9889-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/151d21647527d1079781ba6ae6571ffd-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/151d21647527d1079781ba6ae6571ffd-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/151d21647527d1079781ba6ae6571ffd-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/151d21647527d1079781ba6ae6571ffd-Supplemental.pdf
Transfer learning has emerged as a powerful technique for improving the performance of machine learning models on new domains where labeled training data may be scarce. In this approach a model trained for a source task, where plenty of labeled training data is available, is used as a starting point for training a mode...
SE(3)-Transformers: 3D Roto-Translation Equivariant Attention Networks
https://papers.nips.cc/paper_files/paper/2020/hash/15231a7ce4ba789d13b722cc5c955834-Abstract.html
Fabian Fuchs, Daniel Worrall, Volker Fischer, Max Welling
https://papers.nips.cc/paper_files/paper/2020/hash/15231a7ce4ba789d13b722cc5c955834-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/15231a7ce4ba789d13b722cc5c955834-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9890-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/15231a7ce4ba789d13b722cc5c955834-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/15231a7ce4ba789d13b722cc5c955834-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/15231a7ce4ba789d13b722cc5c955834-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/15231a7ce4ba789d13b722cc5c955834-Supplemental.pdf
We introduce the SE(3)-Transformer, a variant of the self-attention module for 3D point clouds, which is equivariant under continuous 3D roto-translations. Equivariance is important to ensure stable and predictable performance in the presence of nuisance transformations of the data input. A positive corollary of equiva...
On the equivalence of molecular graph convolution and molecular wave function with poor basis set
https://papers.nips.cc/paper_files/paper/2020/hash/1534b76d325a8f591b52d302e7181331-Abstract.html
Masashi Tsubaki, Teruyasu Mizoguchi
https://papers.nips.cc/paper_files/paper/2020/hash/1534b76d325a8f591b52d302e7181331-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1534b76d325a8f591b52d302e7181331-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9891-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1534b76d325a8f591b52d302e7181331-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1534b76d325a8f591b52d302e7181331-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1534b76d325a8f591b52d302e7181331-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1534b76d325a8f591b52d302e7181331-Supplemental.pdf
In this study, we demonstrate that the linear combination of atomic orbitals (LCAO), an approximation introduced by Pauling and Lennard-Jones in the 1920s, corresponds to graph convolutional networks (GCNs) for molecules. However, GCNs involve unnecessary nonlinearity and deep architecture. We also verify that molecula...
The Power of Predictions in Online Control
https://papers.nips.cc/paper_files/paper/2020/hash/155fa09596c7e18e50b58eb7e0c6ccb4-Abstract.html
Chenkai Yu, Guanya Shi, Soon-Jo Chung, Yisong Yue, Adam Wierman
https://papers.nips.cc/paper_files/paper/2020/hash/155fa09596c7e18e50b58eb7e0c6ccb4-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/155fa09596c7e18e50b58eb7e0c6ccb4-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9892-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/155fa09596c7e18e50b58eb7e0c6ccb4-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/155fa09596c7e18e50b58eb7e0c6ccb4-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/155fa09596c7e18e50b58eb7e0c6ccb4-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/155fa09596c7e18e50b58eb7e0c6ccb4-Supplemental.pdf
We study the impact of predictions in online Linear Quadratic Regulator control with both stochastic and adversarial disturbances in the dynamics. In both settings, we characterize the optimal policy and derive tight bounds on the minimum cost and dynamic regret. Perhaps surprisingly, our analysis shows that the conven...
Learning Affordance Landscapes for Interaction Exploration in 3D Environments
https://papers.nips.cc/paper_files/paper/2020/hash/15825aee15eb335cc13f9b559f166ee8-Abstract.html
Tushar Nagarajan, Kristen Grauman
https://papers.nips.cc/paper_files/paper/2020/hash/15825aee15eb335cc13f9b559f166ee8-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/15825aee15eb335cc13f9b559f166ee8-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9893-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/15825aee15eb335cc13f9b559f166ee8-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/15825aee15eb335cc13f9b559f166ee8-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/15825aee15eb335cc13f9b559f166ee8-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/15825aee15eb335cc13f9b559f166ee8-Supplemental.zip
Embodied agents operating in human spaces must be able to master how their environment works: what objects can the agent use, and how can it use them? We introduce a reinforcement learning approach for exploration for interaction, whereby an embodied agent autonomously discovers the affordance landscape of a new unmapp...
Cooperative Multi-player Bandit Optimization
https://papers.nips.cc/paper_files/paper/2020/hash/15ae3b9d6286f1b2a489ea4f3f4abaed-Abstract.html
Ilai Bistritz, Nicholas Bambos
https://papers.nips.cc/paper_files/paper/2020/hash/15ae3b9d6286f1b2a489ea4f3f4abaed-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/15ae3b9d6286f1b2a489ea4f3f4abaed-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9894-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/15ae3b9d6286f1b2a489ea4f3f4abaed-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/15ae3b9d6286f1b2a489ea4f3f4abaed-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/15ae3b9d6286f1b2a489ea4f3f4abaed-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/15ae3b9d6286f1b2a489ea4f3f4abaed-Supplemental.pdf
Consider a team of cooperative players that take actions in a networked environment. At each turn, each player chooses an action and receives a reward that is an unknown function of all the players' actions. The goal of the team of players is to jointly learn to play the action profile that maximizes the sum of their ...
Tight First- and Second-Order Regret Bounds for Adversarial Linear Bandits
https://papers.nips.cc/paper_files/paper/2020/hash/15bb63b28926cd083b15e3b97567bbea-Abstract.html
Shinji Ito, Shuichi Hirahara, Tasuku Soma, Yuichi Yoshida
https://papers.nips.cc/paper_files/paper/2020/hash/15bb63b28926cd083b15e3b97567bbea-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/15bb63b28926cd083b15e3b97567bbea-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9895-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/15bb63b28926cd083b15e3b97567bbea-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/15bb63b28926cd083b15e3b97567bbea-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/15bb63b28926cd083b15e3b97567bbea-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/15bb63b28926cd083b15e3b97567bbea-Supplemental.pdf
We propose novel algorithms with first- and second-order regret bounds for adversarial linear bandits. These regret bounds imply that our algorithms perform well when there is an action achieving a small cumulative loss or the loss has a small variance. In addition, we need only assumptions weaker than those of existin...
Just Pick a Sign: Optimizing Deep Multitask Models with Gradient Sign Dropout
https://papers.nips.cc/paper_files/paper/2020/hash/16002f7a455a94aa4e91cc34ebdb9f2d-Abstract.html
Zhao Chen, Jiquan Ngiam, Yanping Huang, Thang Luong, Henrik Kretzschmar, Yuning Chai, Dragomir Anguelov
https://papers.nips.cc/paper_files/paper/2020/hash/16002f7a455a94aa4e91cc34ebdb9f2d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/16002f7a455a94aa4e91cc34ebdb9f2d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9896-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/16002f7a455a94aa4e91cc34ebdb9f2d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/16002f7a455a94aa4e91cc34ebdb9f2d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/16002f7a455a94aa4e91cc34ebdb9f2d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/16002f7a455a94aa4e91cc34ebdb9f2d-Supplemental.pdf
The vast majority of deep models use multiple gradient signals, typically corresponding to a sum of multiple loss terms, to update a shared set of trainable weights. However, these multiple updates can impede optimal training by pulling the model in conflicting directions. We present Gradient Sign Dropout (GradDrop), a...
A Loss Function for Generative Neural Networks Based on Watson’s Perceptual Model
https://papers.nips.cc/paper_files/paper/2020/hash/165a59f7cf3b5c4396ba65953d679f17-Abstract.html
Steffen Czolbe, Oswin Krause, Ingemar Cox, Christian Igel
https://papers.nips.cc/paper_files/paper/2020/hash/165a59f7cf3b5c4396ba65953d679f17-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/165a59f7cf3b5c4396ba65953d679f17-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9897-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/165a59f7cf3b5c4396ba65953d679f17-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/165a59f7cf3b5c4396ba65953d679f17-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/165a59f7cf3b5c4396ba65953d679f17-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/165a59f7cf3b5c4396ba65953d679f17-Supplemental.pdf
Training Variational Autoencoders (VAEs) to generate realistic imagery requires a loss function that reflects human perception of image similarity. We propose such a loss function based on Watson's perceptual model, which computes a weighted distance in frequency space and accounts for luminance and contrast masking.
Dynamic Fusion of Eye Movement Data and Verbal Narrations in Knowledge-rich Domains
https://papers.nips.cc/paper_files/paper/2020/hash/16837163fee34175358a47e0b51485ff-Abstract.html
Ervine Zheng, Qi Yu, Rui Li, Pengcheng Shi, Anne Haake
https://papers.nips.cc/paper_files/paper/2020/hash/16837163fee34175358a47e0b51485ff-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/16837163fee34175358a47e0b51485ff-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9898-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/16837163fee34175358a47e0b51485ff-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/16837163fee34175358a47e0b51485ff-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/16837163fee34175358a47e0b51485ff-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/16837163fee34175358a47e0b51485ff-Supplemental.pdf
We propose to jointly analyze experts' eye movements and verbal narrations to discover important and interpretable knowledge patterns to better understand their decision-making processes. The discovered patterns can further enhance data-driven statistical models by fusing experts' domain knowledge to support complex hu...
Scalable Multi-Agent Reinforcement Learning for Networked Systems with Average Reward
https://papers.nips.cc/paper_files/paper/2020/hash/168efc366c449fab9c2843e9b54e2a18-Abstract.html
Guannan Qu, Yiheng Lin, Adam Wierman, Na Li
https://papers.nips.cc/paper_files/paper/2020/hash/168efc366c449fab9c2843e9b54e2a18-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/168efc366c449fab9c2843e9b54e2a18-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9899-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/168efc366c449fab9c2843e9b54e2a18-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/168efc366c449fab9c2843e9b54e2a18-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/168efc366c449fab9c2843e9b54e2a18-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/168efc366c449fab9c2843e9b54e2a18-Supplemental.pdf
It has long been recognized that multi-agent reinforcement learning (MARL) faces significant scalability issues due to the fact that the size of the state and action spaces are exponentially large in the number of agents. In this paper, we identify a rich class of networked MARL problems where the model exhibits a loca...
Optimizing Neural Networks via Koopman Operator Theory
https://papers.nips.cc/paper_files/paper/2020/hash/169806bb68ccbf5e6f96ddc60c40a044-Abstract.html
Akshunna S. Dogra, William Redman
https://papers.nips.cc/paper_files/paper/2020/hash/169806bb68ccbf5e6f96ddc60c40a044-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/169806bb68ccbf5e6f96ddc60c40a044-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9900-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/169806bb68ccbf5e6f96ddc60c40a044-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/169806bb68ccbf5e6f96ddc60c40a044-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/169806bb68ccbf5e6f96ddc60c40a044-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/169806bb68ccbf5e6f96ddc60c40a044-Supplemental.pdf
Koopman operator theory, a powerful framework for discovering the underlying dynamics of nonlinear dynamical systems, was recently shown to be intimately connected with neural network training. In this work, we take the first steps in making use of this connection. As Koopman operator theory is a linear theory, a succe...
SVGD as a kernelized Wasserstein gradient flow of the chi-squared divergence
https://papers.nips.cc/paper_files/paper/2020/hash/16f8e136ee5693823268874e58795216-Abstract.html
Sinho Chewi, Thibaut Le Gouic, Chen Lu, Tyler Maunu, Philippe Rigollet
https://papers.nips.cc/paper_files/paper/2020/hash/16f8e136ee5693823268874e58795216-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/16f8e136ee5693823268874e58795216-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9901-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/16f8e136ee5693823268874e58795216-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/16f8e136ee5693823268874e58795216-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/16f8e136ee5693823268874e58795216-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/16f8e136ee5693823268874e58795216-Supplemental.pdf
Stein Variational Gradient Descent (SVGD), a popular sampling algorithm, is often described as the kernelized gradient flow for the Kullback-Leibler divergence in the geometry of optimal transport. We introduce a new perspective on SVGD that instead views SVGD as the kernelized gradient flow of the chi-squared divergen...
Adversarial Robustness of Supervised Sparse Coding
https://papers.nips.cc/paper_files/paper/2020/hash/170f6aa36530c364b77ddf83a84e7351-Abstract.html
Jeremias Sulam, Ramchandran Muthukumar, Raman Arora
https://papers.nips.cc/paper_files/paper/2020/hash/170f6aa36530c364b77ddf83a84e7351-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/170f6aa36530c364b77ddf83a84e7351-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9902-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/170f6aa36530c364b77ddf83a84e7351-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/170f6aa36530c364b77ddf83a84e7351-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/170f6aa36530c364b77ddf83a84e7351-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/170f6aa36530c364b77ddf83a84e7351-Supplemental.pdf
Several recent results provide theoretical insights into the phenomena of adversarial examples. Existing results, however, are often limited due to a gap between the simplicity of the models studied and the complexity of those deployed in practice. In this work, we strike a better balance by considering a model that in...
Differentiable Meta-Learning of Bandit Policies
https://papers.nips.cc/paper_files/paper/2020/hash/171ae1bbb81475eb96287dd78565b38b-Abstract.html
Craig Boutilier, Chih-wei Hsu, Branislav Kveton, Martin Mladenov, Csaba Szepesvari, Manzil Zaheer
https://papers.nips.cc/paper_files/paper/2020/hash/171ae1bbb81475eb96287dd78565b38b-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/171ae1bbb81475eb96287dd78565b38b-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9903-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/171ae1bbb81475eb96287dd78565b38b-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/171ae1bbb81475eb96287dd78565b38b-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/171ae1bbb81475eb96287dd78565b38b-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/171ae1bbb81475eb96287dd78565b38b-Supplemental.pdf
Exploration policies in Bayesian bandits maximize the average reward over problem instances drawn from some distribution P. In this work, we learn such policies for an unknown distribution P using samples from P. Our approach is a form of meta-learning and exploits properties of P without making strong assumptions abou...
Biologically Inspired Mechanisms for Adversarial Robustness
https://papers.nips.cc/paper_files/paper/2020/hash/17256f049f1e3fede17c7a313f7657f4-Abstract.html
Manish Reddy Vuyyuru, Andrzej Banburski, Nishka Pant, Tomaso Poggio
https://papers.nips.cc/paper_files/paper/2020/hash/17256f049f1e3fede17c7a313f7657f4-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/17256f049f1e3fede17c7a313f7657f4-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9904-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/17256f049f1e3fede17c7a313f7657f4-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/17256f049f1e3fede17c7a313f7657f4-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/17256f049f1e3fede17c7a313f7657f4-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/17256f049f1e3fede17c7a313f7657f4-Supplemental.pdf
A convolutional neural network strongly robust to adversarial perturbations at reasonable computational and performance cost has not yet been demonstrated. The primate visual ventral stream seems to be robust to small perturbations in visual stimuli but the underlying mechanisms that give rise to this robust perception...
Statistical-Query Lower Bounds via Functional Gradients
https://papers.nips.cc/paper_files/paper/2020/hash/17257e81a344982579af1ae6415a7b8c-Abstract.html
Surbhi Goel, Aravind Gollakota, Adam Klivans
https://papers.nips.cc/paper_files/paper/2020/hash/17257e81a344982579af1ae6415a7b8c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/17257e81a344982579af1ae6415a7b8c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9905-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/17257e81a344982579af1ae6415a7b8c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/17257e81a344982579af1ae6415a7b8c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/17257e81a344982579af1ae6415a7b8c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/17257e81a344982579af1ae6415a7b8c-Supplemental.pdf
We give the first statistical-query lower bounds for agnostically learning any non-polynomial activation with respect to Gaussian marginals (e.g., ReLU, sigmoid, sign). For the specific problem of ReLU regression (equivalently, agnostically learning a ReLU), we show that any statistical-query algorithm with tolerance ...
Near-Optimal Reinforcement Learning with Self-Play
https://papers.nips.cc/paper_files/paper/2020/hash/172ef5a94b4dd0aa120c6878fc29f70c-Abstract.html
Yu Bai, Chi Jin, Tiancheng Yu
https://papers.nips.cc/paper_files/paper/2020/hash/172ef5a94b4dd0aa120c6878fc29f70c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/172ef5a94b4dd0aa120c6878fc29f70c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9906-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/172ef5a94b4dd0aa120c6878fc29f70c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/172ef5a94b4dd0aa120c6878fc29f70c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/172ef5a94b4dd0aa120c6878fc29f70c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/172ef5a94b4dd0aa120c6878fc29f70c-Supplemental.pdf
This paper considers the problem of designing optimal algorithms for reinforcement learning in two-player zero-sum games. We focus on self-play algorithms which learn the optimal policy by playing against itself without any direct supervision. In a tabular episodic Markov game with S states, A max-player actions and B ...
Network Diffusions via Neural Mean-Field Dynamics
https://papers.nips.cc/paper_files/paper/2020/hash/1730f69e6f66d5f0c741799e82351f81-Abstract.html
Shushan He, Hongyuan Zha, Xiaojing Ye
https://papers.nips.cc/paper_files/paper/2020/hash/1730f69e6f66d5f0c741799e82351f81-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1730f69e6f66d5f0c741799e82351f81-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9907-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1730f69e6f66d5f0c741799e82351f81-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1730f69e6f66d5f0c741799e82351f81-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1730f69e6f66d5f0c741799e82351f81-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1730f69e6f66d5f0c741799e82351f81-Supplemental.pdf
We propose a novel learning framework based on neural mean-field dynamics for inference and estimation problems of diffusion on networks. Our new framework is derived from the Mori-Zwanzig formalism to obtain an exact evolution of the node infection probabilities, which renders a delay differential equation with memory...
Self-Distillation as Instance-Specific Label Smoothing
https://papers.nips.cc/paper_files/paper/2020/hash/1731592aca5fb4d789c4119c65c10b4b-Abstract.html
Zhilu Zhang, Mert Sabuncu
https://papers.nips.cc/paper_files/paper/2020/hash/1731592aca5fb4d789c4119c65c10b4b-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1731592aca5fb4d789c4119c65c10b4b-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9908-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1731592aca5fb4d789c4119c65c10b4b-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1731592aca5fb4d789c4119c65c10b4b-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1731592aca5fb4d789c4119c65c10b4b-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1731592aca5fb4d789c4119c65c10b4b-Supplemental.pdf
It has been recently demonstrated that multi-generational self-distillation can improve generalization. Despite this intriguing observation, reasons for the enhancement remain poorly understood. In this paper, we first demonstrate experimentally that the improved performance of multi-generational self-distillation is i...
Towards Problem-dependent Optimal Learning Rates
https://papers.nips.cc/paper_files/paper/2020/hash/174f8f613332b27e9e8a5138adb7e920-Abstract.html
Yunbei Xu, Assaf Zeevi
https://papers.nips.cc/paper_files/paper/2020/hash/174f8f613332b27e9e8a5138adb7e920-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/174f8f613332b27e9e8a5138adb7e920-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9909-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/174f8f613332b27e9e8a5138adb7e920-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/174f8f613332b27e9e8a5138adb7e920-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/174f8f613332b27e9e8a5138adb7e920-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/174f8f613332b27e9e8a5138adb7e920-Supplemental.pdf
We study problem-dependent rates, i.e., generalization errors that scale tightly with the variance or the effective loss at the "best hypothesis." Existing uniform convergence and localization frameworks, the most widely used tools to study this problem, often fail to simultaneously provide parameter localization and...
Cross-lingual Retrieval for Iterative Self-Supervised Training
https://papers.nips.cc/paper_files/paper/2020/hash/1763ea5a7e72dd7ee64073c2dda7a7a8-Abstract.html
Chau Tran, Yuqing Tang, Xian Li, Jiatao Gu
https://papers.nips.cc/paper_files/paper/2020/hash/1763ea5a7e72dd7ee64073c2dda7a7a8-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1763ea5a7e72dd7ee64073c2dda7a7a8-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9910-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1763ea5a7e72dd7ee64073c2dda7a7a8-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1763ea5a7e72dd7ee64073c2dda7a7a8-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1763ea5a7e72dd7ee64073c2dda7a7a8-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1763ea5a7e72dd7ee64073c2dda7a7a8-Supplemental.pdf
Recent studies have demonstrated the cross-lingual alignment ability of multilingual pretrained language models. In this work, we found that the cross-lingual alignment can be further improved by training seq2seq models on sentence pairs mined using their own encoder outputs. We utilized these findings to develop a new...
Rethinking pooling in graph neural networks
https://papers.nips.cc/paper_files/paper/2020/hash/1764183ef03fc7324eb58c3842bd9a57-Abstract.html
Diego Mesquita, Amauri Souza, Samuel Kaski
https://papers.nips.cc/paper_files/paper/2020/hash/1764183ef03fc7324eb58c3842bd9a57-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1764183ef03fc7324eb58c3842bd9a57-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9911-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1764183ef03fc7324eb58c3842bd9a57-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1764183ef03fc7324eb58c3842bd9a57-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1764183ef03fc7324eb58c3842bd9a57-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1764183ef03fc7324eb58c3842bd9a57-Supplemental.pdf
Graph pooling is a central component of a myriad of graph neural network (GNN) architectures. As an inheritance from traditional CNNs, most approaches formulate graph pooling as a cluster assignment problem, extending the idea of local patches in regular grids to graphs. Despite the wide adherence to this design choice...
Pointer Graph Networks
https://papers.nips.cc/paper_files/paper/2020/hash/176bf6219855a6eb1f3a30903e34b6fb-Abstract.html
Petar Veličković, Lars Buesing, Matthew Overlan, Razvan Pascanu, Oriol Vinyals, Charles Blundell
https://papers.nips.cc/paper_files/paper/2020/hash/176bf6219855a6eb1f3a30903e34b6fb-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/176bf6219855a6eb1f3a30903e34b6fb-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9912-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/176bf6219855a6eb1f3a30903e34b6fb-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/176bf6219855a6eb1f3a30903e34b6fb-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/176bf6219855a6eb1f3a30903e34b6fb-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/176bf6219855a6eb1f3a30903e34b6fb-Supplemental.zip
Graph neural networks (GNNs) are typically applied to static graphs that are assumed to be known upfront. This static input structure is often informed purely by insight of the machine learning practitioner, and might not be optimal for the actual task the GNN is solving. In absence of reliable domain expertise, one mi...
Gradient Regularized V-Learning for Dynamic Treatment Regimes
https://papers.nips.cc/paper_files/paper/2020/hash/17b3c7061788dbe82de5abe9f6fe22b3-Abstract.html
Yao Zhang, Mihaela van der Schaar
https://papers.nips.cc/paper_files/paper/2020/hash/17b3c7061788dbe82de5abe9f6fe22b3-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/17b3c7061788dbe82de5abe9f6fe22b3-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9913-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/17b3c7061788dbe82de5abe9f6fe22b3-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/17b3c7061788dbe82de5abe9f6fe22b3-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/17b3c7061788dbe82de5abe9f6fe22b3-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/17b3c7061788dbe82de5abe9f6fe22b3-Supplemental.zip
Deciding how to optimally treat a patient, including how to select treatments over time among the multiple available treatments, represents one of the most important issues that need to be addressed in medicine today. A dynamic treatment regime (DTR) is a sequence of treatment rules indicating how to individualize trea...
Faster Wasserstein Distance Estimation with the Sinkhorn Divergence
https://papers.nips.cc/paper_files/paper/2020/hash/17f98ddf040204eda0af36a108cbdea4-Abstract.html
Lénaïc Chizat, Pierre Roussillon, Flavien Léger, François-Xavier Vialard, Gabriel Peyré
https://papers.nips.cc/paper_files/paper/2020/hash/17f98ddf040204eda0af36a108cbdea4-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/17f98ddf040204eda0af36a108cbdea4-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9914-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/17f98ddf040204eda0af36a108cbdea4-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/17f98ddf040204eda0af36a108cbdea4-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/17f98ddf040204eda0af36a108cbdea4-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/17f98ddf040204eda0af36a108cbdea4-Supplemental.pdf
The squared Wasserstein distance is a natural quantity to compare probability distributions in a non-parametric setting. This quantity is usually estimated with the plug-in estimator, defined via a discrete optimal transport problem which can be solved to $\epsilon$-accuracy by adding an entropic regularization of orde...
Forethought and Hindsight in Credit Assignment
https://papers.nips.cc/paper_files/paper/2020/hash/18064d61b6f93dab8681a460779b8429-Abstract.html
Veronica Chelu, Doina Precup, Hado P. van Hasselt
https://papers.nips.cc/paper_files/paper/2020/hash/18064d61b6f93dab8681a460779b8429-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/18064d61b6f93dab8681a460779b8429-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9915-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/18064d61b6f93dab8681a460779b8429-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/18064d61b6f93dab8681a460779b8429-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/18064d61b6f93dab8681a460779b8429-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/18064d61b6f93dab8681a460779b8429-Supplemental.pdf
We address the problem of credit assignment in reinforcement learning and explore fundamental questions regarding the way in which an agent can best use additional computation to propagate new information, by planning with internal models of the world to improve its predictions. Particularly, we work to understand the ...
Robust Recursive Partitioning for Heterogeneous Treatment Effects with Uncertainty Quantification
https://papers.nips.cc/paper_files/paper/2020/hash/1819020b02e926785cf3be594d957696-Abstract.html
Hyun-Suk Lee, Yao Zhang, William Zame, Cong Shen, Jang-Won Lee, Mihaela van der Schaar
https://papers.nips.cc/paper_files/paper/2020/hash/1819020b02e926785cf3be594d957696-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1819020b02e926785cf3be594d957696-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9916-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1819020b02e926785cf3be594d957696-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1819020b02e926785cf3be594d957696-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1819020b02e926785cf3be594d957696-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1819020b02e926785cf3be594d957696-Supplemental.pdf
Subgroup analysis of treatment effects plays an important role in applications from medicine to public policy to recommender systems. It allows physicians (for example) to identify groups of patients for whom a given drug or treatment is likely to be effective and groups of patients for which it is not. Most of the c...
Rescuing neural spike train models from bad MLE
https://papers.nips.cc/paper_files/paper/2020/hash/186b690e29892f137b4c34cfa40a3a4d-Abstract.html
Diego Arribas, Yuan Zhao, Il Memming Park
https://papers.nips.cc/paper_files/paper/2020/hash/186b690e29892f137b4c34cfa40a3a4d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/186b690e29892f137b4c34cfa40a3a4d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9917-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/186b690e29892f137b4c34cfa40a3a4d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/186b690e29892f137b4c34cfa40a3a4d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/186b690e29892f137b4c34cfa40a3a4d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/186b690e29892f137b4c34cfa40a3a4d-Supplemental.pdf
The standard approach to fitting an autoregressive spike train model is to maximize the likelihood for one-step prediction. This maximum likelihood estimation (MLE) often leads to models that perform poorly when generating samples recursively for more than one time step. Moreover, the generated spike trains can fail to...
Lower Bounds and Optimal Algorithms for Personalized Federated Learning
https://papers.nips.cc/paper_files/paper/2020/hash/187acf7982f3c169b3075132380986e4-Abstract.html
Filip Hanzely, Slavomír Hanzely, Samuel Horváth, Peter Richtarik
https://papers.nips.cc/paper_files/paper/2020/hash/187acf7982f3c169b3075132380986e4-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/187acf7982f3c169b3075132380986e4-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9918-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/187acf7982f3c169b3075132380986e4-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/187acf7982f3c169b3075132380986e4-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/187acf7982f3c169b3075132380986e4-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/187acf7982f3c169b3075132380986e4-Supplemental.zip
In this work, we consider the optimization formulation of personalized federated learning recently introduced by Hanzely & Richtarik (2020) which was shown to give an alternative explanation to the workings of local SGD methods. Our first contribution is establishing the first lower bounds for this formulation, for bot...
Black-Box Certification with Randomized Smoothing: A Functional Optimization Based Framework
https://papers.nips.cc/paper_files/paper/2020/hash/1896a3bf730516dd643ba67b4c447d36-Abstract.html
Dinghuai Zhang, Mao Ye, Chengyue Gong, Zhanxing Zhu, Qiang Liu
https://papers.nips.cc/paper_files/paper/2020/hash/1896a3bf730516dd643ba67b4c447d36-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1896a3bf730516dd643ba67b4c447d36-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9919-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1896a3bf730516dd643ba67b4c447d36-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1896a3bf730516dd643ba67b4c447d36-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1896a3bf730516dd643ba67b4c447d36-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1896a3bf730516dd643ba67b4c447d36-Supplemental.pdf
Randomized classifiers have been shown to provide a promising approach for achieving certified robustness against adversarial attacks in deep learning. However, most existing methods only leverage Gaussian smoothing noise and only work for $\ell_2$ perturbation. We propose a general framework of adversarial certificati...
Deep Imitation Learning for Bimanual Robotic Manipulation
https://papers.nips.cc/paper_files/paper/2020/hash/18a010d2a9813e91907ce88cd9143fdf-Abstract.html
Fan Xie, Alexander Chowdhury, M. Clara De Paolis Kaluza, Linfeng Zhao, Lawson Wong, Rose Yu
https://papers.nips.cc/paper_files/paper/2020/hash/18a010d2a9813e91907ce88cd9143fdf-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/18a010d2a9813e91907ce88cd9143fdf-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9920-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/18a010d2a9813e91907ce88cd9143fdf-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/18a010d2a9813e91907ce88cd9143fdf-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/18a010d2a9813e91907ce88cd9143fdf-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/18a010d2a9813e91907ce88cd9143fdf-Supplemental.zip
We present a deep imitation learning framework for robotic bimanual manipulation in a continuous state-action space. A core challenge is to generalize the manipulation skills to objects in different locations. We hypothesize that modeling the relational information in the environment can significantly improve generali...
Stationary Activations for Uncertainty Calibration in Deep Learning
https://papers.nips.cc/paper_files/paper/2020/hash/18a411989b47ed75a60ac69d9da05aa5-Abstract.html
Lassi Meronen, Christabella Irwanto, Arno Solin
https://papers.nips.cc/paper_files/paper/2020/hash/18a411989b47ed75a60ac69d9da05aa5-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/18a411989b47ed75a60ac69d9da05aa5-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9921-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/18a411989b47ed75a60ac69d9da05aa5-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/18a411989b47ed75a60ac69d9da05aa5-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/18a411989b47ed75a60ac69d9da05aa5-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/18a411989b47ed75a60ac69d9da05aa5-Supplemental.pdf
We introduce a new family of non-linear neural network activation functions that mimic the properties induced by the widely-used Mat\'ern family of kernels in Gaussian process (GP) models. This class spans a range of locally stationary models of various degrees of mean-square differentiability. We show an explicit lin...
Ensemble Distillation for Robust Model Fusion in Federated Learning
https://papers.nips.cc/paper_files/paper/2020/hash/18df51b97ccd68128e994804f3eccc87-Abstract.html
Tao Lin, Lingjing Kong, Sebastian U. Stich, Martin Jaggi
https://papers.nips.cc/paper_files/paper/2020/hash/18df51b97ccd68128e994804f3eccc87-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/18df51b97ccd68128e994804f3eccc87-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9922-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/18df51b97ccd68128e994804f3eccc87-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/18df51b97ccd68128e994804f3eccc87-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/18df51b97ccd68128e994804f3eccc87-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/18df51b97ccd68128e994804f3eccc87-Supplemental.pdf
In this work we investigate more powerful and more flexible aggregation schemes for FL. Specifically, we propose ensemble distillation for model fusion, i.e. training the central classifier through unlabeled data on the outputs of the models from the clients. This knowledge distillation technique mitigates privacy risk...
Falcon: Fast Spectral Inference on Encrypted Data
https://papers.nips.cc/paper_files/paper/2020/hash/18fc72d8b8aba03a4d84f66efabce82e-Abstract.html
Qian Lou, Wen-jie Lu, Cheng Hong, Lei Jiang
https://papers.nips.cc/paper_files/paper/2020/hash/18fc72d8b8aba03a4d84f66efabce82e-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/18fc72d8b8aba03a4d84f66efabce82e-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9923-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/18fc72d8b8aba03a4d84f66efabce82e-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/18fc72d8b8aba03a4d84f66efabce82e-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/18fc72d8b8aba03a4d84f66efabce82e-Review.html
null
Homomorphic Encryption (HE) based secure Neural Networks(NNs) inference is one of the most promising security solutions to emerging Machine Learning as a Service (MLaaS). In the HE-based MLaaS setting, a client encrypts the sensitive data, and uploads the encrypted data to the server that directly processes the encrypt...
On Power Laws in Deep Ensembles
https://papers.nips.cc/paper_files/paper/2020/hash/191595dc11b4d6e54f01504e3aa92f96-Abstract.html
Ekaterina Lobacheva, Nadezhda Chirkova, Maxim Kodryan, Dmitry P. Vetrov
https://papers.nips.cc/paper_files/paper/2020/hash/191595dc11b4d6e54f01504e3aa92f96-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/191595dc11b4d6e54f01504e3aa92f96-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9924-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/191595dc11b4d6e54f01504e3aa92f96-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/191595dc11b4d6e54f01504e3aa92f96-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/191595dc11b4d6e54f01504e3aa92f96-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/191595dc11b4d6e54f01504e3aa92f96-Supplemental.pdf
Ensembles of deep neural networks are known to achieve state-of-the-art performance in uncertainty estimation and lead to accuracy improvement. In this work, we focus on a classification problem and investigate the behavior of both non-calibrated and calibrated negative log-likelihood (CNLL) of a deep ensemble as a fun...