| column | dtype | values |
|---|---|---|
| forum_id | string | lengths 9–13 |
| conference | string | 1 class |
| year | int32 | 2021–2021 |
| track | string | 3 classes |
| venue_id | string | 6 classes |
| paper_number | int32 | 1–29.3k |
| title | string | lengths 14–183 |
| abstract | string | lengths 246–3.6k |
| authors | list | lengths 0–56 |
| keywords | list | lengths 0–32 |
| tldr | string | lengths 0–250 |
| primary_area | string | 38 classes |
| venue | string | 19 classes |
| decision | string | 8 classes |
| decision_comment | string | lengths 0–9.28k |
| author_rebuttal | string | lengths 0–6k |
| num_reviews | int32 | 2–8 |
| reviews_json | string | lengths 1.96k–78.7k |
| markdown | string | lengths 0–1.05M |
| markdown_chars | int64 | 0–1.05M |
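The schema above packs each paper's reviews into a single JSON-encoded string column, `reviews_json`. A minimal sketch of decoding it with Python's standard `json` module; the record below is an abridged mock built only from column names and values visible in the row previews, not a real dataset row:

```python
import json

# Abridged mock record using only columns from the schema above; the values are
# illustrative, trimmed from the row previews rather than taken from a full row.
record = {
    "forum_id": "zdTW91r2wKO",
    "conference": "neurips",
    "year": 2021,
    "decision": "Accept (Poster)",
    "num_reviews": 1,
    # reviews_json holds a JSON-encoded list of review objects.
    "reviews_json": json.dumps([
        {"review_id": "zoZzaoPsTpS", "reviewer": "Reviewer_Ksky",
         "summary": "The paper presents data-driven exploration strategies...",
         "rating": 5, "confidence": 4},
    ]),
}

# Decode the packed column back into Python objects.
reviews = json.loads(record["reviews_json"])
ratings = [r["rating"] for r in reviews]
print(ratings)  # [5]
```

On the real dataset the same `json.loads` call applies per row; the exact set of keys inside each review object beyond those shown in the previews (`review_id`, `reviewer`, `summary`, `rating`, `confidence`, ...) is an assumption to verify against the data.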
zsq86HNvXr6
neurips
2021
main
NeurIPS.cc/2021/Conference
5,843
Last iterate convergence of SGD for Least-Squares in the Interpolation regime.
Motivated by the recent successes of neural networks that have the ability to fit the data perfectly \emph{and} generalize well, we study the noiseless model in the fundamental least-squares setup. We assume that an optimum predictor perfectly fits the inputs and outputs $\langle \theta_* , \phi(X) \rangle = Y$, where ...
[ "Aditya Vardhan Varre", "Loucas Pillaud-Vivien", "Nicolas Flammarion" ]
[ "Interpolation", "Least-Squares", "SGD", "Last iterate", "Non-parametric rates." ]
NeurIPS 2021 Poster
Accept (Poster)
This paper proves the risk bound of the last iterate for constant step size SGD in the interpolation regime. The main concern from the reviewers is that the linear regression model is assumed to be noiseless, which makes the results less interesting. After the author response and reviewer discussion, the paper gathers ...
4
[{"review_id": "pvBSopsbAgL", "reviewer": "Reviewer_Mkow", "summary": "The paper explores the last iterate convergence of SGD on the least squares problem in the interpolation regime (noiseless setting). The authors work in an abstract Hilbert space and thus consider (1) the non-strongly convex setting and (2) the stre...
Last iterate convergence of SGD for Least-Squares in the Interpolation regime Aditya Varre (EPFL), Loucas Pillaud-Vivien (EPFL), Nicolas Flammarion (EPFL) Abstract Motivated by the recent successes of neural networks that have th...
42,221
zOngaSKrElL
neurips
2021
main
NeurIPS.cc/2021/Conference
4,410
Self-Supervised Bug Detection and Repair
Machine learning-based program analyses have recently shown the promise of integrating formal and probabilistic reasoning towards aiding software development. However, in the absence of large annotated corpora, training these analyses is challenging. Towards addressing this, we present BugLab, an approach for self-supe...
[ "Miltiadis Allamanis", "Henry Richard Jackson-Flux", "Marc Brockschmidt" ]
[ "ml4code", "bug detection", "gnn" ]
NeurIPS 2021 Poster
Accept (Poster)
The paper provides a novel approach to train a bug detector by co-training a bug injection procedure together with the bug detector. This is an interesting idea, and while the resulting bug detector has a high number of false positives, it was able to find new bugs in PyPI packages. (19 of 1000 reported bugs turned out...
3
[{"review_id": "uokunvYoEee", "reviewer": "Reviewer_XqDo", "summary": "Authors propose self-supervised learning approach that trains two models: one to detect and repair bugs in code and another one to generate code with difficult bugs. They use their trained model to improve results on baseline test dataset and to fin...
Self-Supervised Bug Detection and Repair Miltiadis Allamanis, Henry Jackson-Flux, Marc Brockschmidt Microsoft Research, Cambridge, UK Abstract Machine learning-based program analyses have recently shown the promise of integrating formal and probabilistic reasoning ...
47,489
zweDnxxWRe
neurips
2021
main
NeurIPS.cc/2021/Conference
6,809
Multi-task Learning of Order-Consistent Causal Graphs
We consider the problem of discovering $K$ related Gaussian directed acyclic graphs (DAGs), where the involved graph structures share a consistent causal order and sparse unions of supports. Under the multi-task learning setting, we propose a $l_1/l_2$-regularized maximum likelihood estimator (MLE) for learning $K$ lin...
[ "Xinshi Chen", "Haoran Sun", "Caleb Ellington", "Eric Xing", "Le Song" ]
[ "Causal discovery", "DAG estimation", "multi-task learning", "graphical models" ]
NeurIPS 2021 Poster
Accept (Poster)
This was a very borderline paper. After the author response and discussion, the one reviewer recommending rejection raised their score to a weak accept. Please make sure to read all of the reviewer feedback carefully and make suggested/promised changes for the camera ready. In particular, reviewer x4HL had some additio...
3
[{"review_id": "cncp70Fm6wi", "reviewer": "Reviewer_x4HL", "summary": "The paper considers structure discovery of multiple but related DAGs in the sense that they share a common causal order. Authors propose an l1/l2-regularized MLE for multiple linear Gaussian models and design a continuous optimization method to obta...
Multi-task Learning of Order-Consistent Causal Graphs Xinshi Chen (Georgia Institute of Technology), Haoran Sun (Georgia Institute of Technology), Caleb Ellington (Carnegie Mellon University), Eric Xing (Carnegie Mellon University, MBZUAI), Le Song (BioMap, MBZUAI) ...
43,849
zkHlu_3sJYU
neurips
2021
main
NeurIPS.cc/2021/Conference
1,590
SWAD: Domain Generalization by Seeking Flat Minima
Domain generalization (DG) methods aim to achieve generalizability to an unseen target domain by using only training data from the source domains. Although a variety of DG methods have been proposed, a recent study shows that under a fair evaluation protocol, called DomainBed, the simple empirical risk minimization (ER...
[ "Junbum Cha", "Sanghyuk Chun", "Kyungjae Lee", "Han-Cheol Cho", "Seunghyun Park", "Yunsung Lee", "Sungrae Park" ]
[ "domain generalization", "robustness", "data distribution shift", "domain shift", "flatness minimization", "sharpness minimization", "generalization" ]
NeurIPS 2021 Poster
Accept (Poster)
This paper proposes Stochastic Weight Averaging Densely (SWAD) to improve domain generalization. As the name suggests, it is a modification of Stochastic Weight Averaging (SWA) by averaging more densely (every iteration). Reviewers mostly agree on strength and weaknesses of this work. They all believe that this work i...
4
[{"review_id": "OEUIkBI_712", "reviewer": "Reviewer_1Nc7", "summary": "This paper extends the idea of Stochastic Weighting Averaging (SWA) to the context of Domain Generalization (DG). Theoretically, the authors argues that finding a flat minima improves domain generalization. Algorithmically, the authors modify SWA by...
SWAD: Domain Generalization by Seeking Flat Minima Junbum Cha, Sanghyuk Chun, Seunghyun Park, Kyungjae Lee, Han-Cheol Cho, Yunsung Lee, Sungrae Park Kakao Brain; NAVER AI Lab; Chung-Ang University; Upstage AI Research; NAVER Clova; Korea University Abstract Domain generalization (DG) methods aim to achieve generalizability to...
50,950
zmVumB1Flg
neurips
2021
main
NeurIPS.cc/2021/Conference
1,896
Universal Semi-Supervised Learning
Universal Semi-Supervised Learning (UniSSL) aims to solve the open-set problem where both the class distribution (i.e., class set) and feature distribution (i.e., feature domain) are different between labeled dataset and unlabeled dataset. Such a problem seriously hinders the realistic landing of classical SSL. Differe...
[ "Zhuo Huang", "Chao Xue", "Bo Han", "Jian Yang", "Chen Gong" ]
[ "Semi-Supervised Learning" ]
NeurIPS 2021 Poster
Accept (Poster)
This paper studies universal semi-supervised learning problem. In this setting, class distribution and feature distribution both mismatch between labeled and unlabeled data. The authors propose an algorithm CAFA and demonstrate its effectiveness on several benchmark data sets. The reviewers agree that the paper provide...
4
[{"review_id": "gKb7T6FRnDn", "reviewer": "Reviewer_v3uq", "summary": "This paper studies the universal semi-supervised learning, which combines traditional SSL and domain adaptation with the open-set problem. Under this setting, two issues are raised as the technical challenge, i.e., the class distribution shift and f...
Universal Semi-Supervised Learning Zhuo Huang, Chao Xue, Bo Han, Jian Yang, Chen Gong PCA Lab, Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education; Jiangsu Key Lab of Image and Video Understanding for Social Security, School of Computer S...
43,410
zdmF437BCB
neurips
2021
main
NeurIPS.cc/2021/Conference
8,165
Domain Adaptation with Invariant Representation Learning: What Transformations to Learn?
Unsupervised domain adaptation, as a prevalent transfer learning setting, spans many real-world applications. With the increasing representational power and applicability of neural networks, state-of-the-art domain adaptation methods make use of deep architectures to map the input features $X$ to a latent representatio...
[ "Petar Stojanov", "Zijian Li", "Mingming Gong", "Ruichu Cai", "Jaime G. Carbonell", "Kun Zhang" ]
[ "Domain adaptation", "transfer learning", "deep learning", "adversarial training", "autencoders" ]
NeurIPS 2021 Poster
Accept (Poster)
The authors study unsupervised domain adaptation. They point out that if the data supports overlap, the encoder might need domain-specific information to recover an invariant representation which elicits a high-accuracy classifier. They propose a method which models this domain-specific information as a latent variable...
4
[{"review_id": "zneRShUmUtO", "reviewer": "Reviewer_DkiW", "summary": "This paper focuses on making use of latent invariant representations for unsupervised domain adaptation. The goal is to address two State-Of-The-Art existing methods issues. The first is there is no principled way to ensure that marginal invariance ...
Domain Adaptation with Invariant Representation Learning: What Transformations to Learn? Petar Stojanov, Zijian Li, Mingming Gong, Ruichu Cai, Jaime G. Carbonell, Kun Zhang Carnegie Mellon University School of Computer Science; Guangdong University of Technology; School of Mathematics and Statistics, University of Melbo...
46,723
zwkj1_pxFM
neurips
2021
main
NeurIPS.cc/2021/Conference
5,161
A nonparametric method for gradual change problems with statistical guarantees
We consider the detection and localization of gradual changes in the distribution of a sequence of time-ordered observations. Existing literature focuses mostly on the simpler abrupt setting which assumes a discontinuity jump in distribution, and is unrealistic for some applied settings. We propose a general method for...
[ "Lizhen Nie", "Dan L Nicolae" ]
[ "Change point detection", "gradual change", "nonparametric" ]
NeurIPS 2021 Poster
Accept (Poster)
As mentioned by the reviewers the paper tackles an exciting problem. The authors are also very knowledgeable on the topic, and the proposal sounds interesting. However, the paper requires a bit more work in order to be accepted at a top conference or journal. Here are a few suggestions for a future submission of the pa...
4
[{"review_id": "mkuGwgZD2jQ", "reviewer": "Reviewer_MgSk", "summary": "Paper proposed a kernel-based method to detect/localize graduate changing points in time series data. Paper claims flexibility compared to the existing methods and targeted some issues raised in the literature. Paper provides theoretical analysis an...
A nonparametric method for gradual change problems with statistical guarantees Lizhen Nie, Department of Statistics, The University of Chicago; Dan Nicolae, Department of Statistics, The University of Chicago Abstract We consi...
46,498
zdC5eXljMPy
neurips
2021
main
NeurIPS.cc/2021/Conference
11,385
Weighted model estimation for offline model-based reinforcement learning
This paper discusses model estimation in offline model-based reinforcement learning (MBRL), which is important for subsequent policy improvement using an estimated model. From the viewpoint of covariate shift, a natural idea is model estimation weighted by the ratio of the state-action distributions of offline data and...
[ "Toru Hishinuma", "Kei Senda" ]
[ "model-based reinforcement learning" ]
NeurIPS 2021 Poster
Accept (Poster)
The reviewers find the problem addressed by the paper important and the proposed idea novel. The most important negative aspect of the paper, however, is with its empirical studies: There are several issues there: 1) Although the empirical results for the simpler environment (Pendulum) is promising, the results for m...
4
[{"review_id": "w3S_H827T0I", "reviewer": "Reviewer_mD7J", "summary": "The paper presents a model-based policy optimization algorithm for offline reinforcement learning. To account for the mismatch between the available dataset and the experience used during training, the proposed method uses estimation of the density ...
Weighted model estimation for offline model-based reinforcement learning Toru Hishinuma, Kei Senda Kyoto University Abstract This paper discusses model estimation in offline model-based reinforcement learning (MBRL), which is important for subsequ...
35,522
zL1szwVKdwc
neurips
2021
main
NeurIPS.cc/2021/Conference
8,053
The Elastic Lottery Ticket Hypothesis
Lottery Ticket Hypothesis (LTH) raises keen attention to identifying sparse trainable subnetworks, or winning tickets, which can be trained in isolation to achieve similar or even better performance compared to the full models. Despite many efforts being made, the most effective method to identify such winning tickets ...
[ "Xiaohan Chen", "Yu Cheng", "Shuohang Wang", "Zhe Gan", "Jingjing Liu", "Zhangyang Wang" ]
[ "Lottery Ticket Hypothesis" ]
NeurIPS 2021 Poster
Accept (Poster)
This paper investigates whether lottery tickets can transfer across architectures of different depths, following up on prior work which showed transfer across datasets for the same architecture. Reviewers all found the clarity and originality to be strong and the topic is of significant interest. There were some concer...
3
[{"review_id": "z-xAF6o__OV", "reviewer": "Reviewer_r4hi", "summary": "This paper focused on exploring the transferability of a subnetwork obtained from the given dense network. Following the Lottery Ticket Hypothesis (LTH), a new hypothesis called Elastic Lottery Ticket Hypothesis (E-LTH) was proposed along with a cor...
The Elastic Lottery Ticket Hypothesis Xiaohan Chen, Yu Cheng, Shuohang Wang, Zhangyang Wang, Zhe Gan, Jingjing Liu University of Texas at Austin; Microsoft Corporation; Tsinghua University Abstract Lottery Tic...
50,040
zqo2sqixxbE
neurips
2021
main
NeurIPS.cc/2021/Conference
7,462
Asymptotically Best Causal Effect Identification with Multi-Armed Bandits
This paper considers the problem of selecting a formula for identifying a causal quantity of interest among a set of available formulas. We assume an online setting in which the investigator may alter the data collection mechanism in a data-dependent way with the aim of identifying the formula with lowest asymptotic va...
[ "Alan Malek", "Silvia Chiappa" ]
[ "causal identification formulas", "frontdoor criterion", "adjustment criterion", "causal inference", "multi armed bandits", "best-arm-identification", "confidence sequences", "double machine learning", "causal effect estimator", "nuisance parameter" ]
NeurIPS 2021 Poster
Accept (Poster)
This paper used best-arm identifications to select the best estimator for identifying a causal effect. While the techniques are not new, the reviewers and the AC believe the result in this paper is an important first step. The AC also recommends that the authors add discussions on recent theoretical results (e.g., fin...
3
[{"review_id": "nls2Ktw69Qo", "reviewer": "Reviewer_QgMM", "summary": "This paper aims to select the best causal formula among a set of available formulas based on the asymptotic variance criterion. An online setting with best-arm-identification framework is considered. The paper poses the sequential decision problem o...
Asymptotically Best Causal Effect Identification with Multi-Armed Bandits Alan Malek, Silvia Chiappa DeepMind, London Abstract This paper considers the problem of selecting a formula for identifying a causal quantity of interest among a set of available formu...
40,449
zjJyjQj1W7U
neurips
2021
main
NeurIPS.cc/2021/Conference
10,489
Hierarchical Clustering: $O(1)$-Approximation for Well-Clustered Graphs
Hierarchical clustering studies a recursive partition of a data set into clusters of successively smaller size, and is a fundamental problem in data analysis. In this work we study the cost function for hierarchical clustering introduced by Dasgupta, and present two polynomial-time approximation algorithms: Our first ...
[ "Bogdan Adrian Manghiuc", "He Sun" ]
[ "Hierarchical clustering", "graph algorithms", "spectral methods" ]
NeurIPS 2021 Poster
Accept (Poster)
The authors present a new hierarchical algorithm to cluster graphs. The main result showed in the paper is that the new algorithm returns a constant approximation for a well-clustered graph. The paper is a clear extension of previous work in NeurIPS and it presents interesting theoretical results. One shortcoming of th...
4
[{"review_id": "vmpqcRRdGXQ", "reviewer": "Reviewer_sgob", "summary": "The authors study the important problem of clustering graph in a hierarchical way optimizing and present novel algorithms for the special case of graphs with expender structure or well clusterable graphs, The algorithms have state of the art approxi...
Hierarchical Clustering: $O(1)$-Approximation for Well-Clustered Graphs Bogdan-Adrian Manghiuc, He Sun School of Informatics, The University of Edinburgh Abstract Hierarchical clustering studies a recursive partition of a data set into c...
43,098
zzSVN5x8JiX1m
neurips
2021
main
NeurIPS.cc/2021/Conference
11,656
On the Role of Optimization in Double Descent: A Least Squares Study
Empirically it has been observed that the performance of deep neural networks steadily improves with increased model size, contradicting the classical view on overfitting and generalization. Recently, the double descent phenomenon has been proposed to reconcile this observation with theory, suggesting that the test err...
[ "Ilja Kuzborskij", "Csaba Szepesvari", "Omar Rivasplata", "Amal Rannen-Triki", "Razvan Pascanu" ]
[ "Double Descent", "Optimization Error", "Excess Risk", "Generalization", "Least Squares" ]
NeurIPS 2021 Poster
Accept (Poster)
Reviewers reached a consensus by which the paper is not ready for publication. Several concerns were raised, most notably: * Only an upper bound on the excess risk is derived, which is insufficient for establishing a double descent phenomenon (for that, a matching lower bound is required). * While experiments indica...
3
[{"review_id": "wStH974jaR_", "reviewer": "Reviewer_XaVR", "summary": "This paper studies the double descent phenomenon of linear least squares problems from an optimization point of view. Specifically, the contributions of the paper lie in two parts: (1) for a linear least squares problem, an upper bound for the exce...
On the Role of Optimization in Double Descent: A Least Squares Study Ilja Kuzborskij (DeepMind), Csaba Szepesvári (University of Alberta and DeepMind, Edmonton, Canada), Omar Rivasplata (University College London), Amal Rannen-Triki (DeepMind), Razvan Pascanu (DeepMind) Abstract Empirically it has been observed that the performance of deep ...
38,852
zaqGp90Od4y
neurips
2021
main
NeurIPS.cc/2021/Conference
1,657
Boost Neural Networks by Checkpoints
Training multiple deep neural networks (DNNs) and averaging their outputs is a simple way to improve the predictive performance. Nevertheless, the multiplied training cost prevents this ensemble method to be practical and efficient. Several recent works attempt to save and ensemble the checkpoints of DNNs, which only r...
[ "Feng Wang", "Guoyizhe Wei", "Qiao Liu", "Jinxiang Ou", "Xian Wei", "Hairong Lv" ]
[ "Ensembles", "Boosting", "Deep Neural Networks" ]
NeurIPS 2021 Poster
Accept (Poster)
I recommend to accept this paper. In this paper, the authors proposed a boosting method to ensemble checkpoints during the training of neural networks called Checkpoint-Boosted Neural Network (CBNN) to improve the performance. In particular, a boosting scheme with both theoretical guarantee and empirical justification...
3
[{"review_id": "tEZWN6N9a30", "reviewer": "Reviewer_Ca6w", "summary": "This paper proposed to ensemble the checkpoint models during the training to boost the performance of neural networks. To improve diversity over checkpoints, the proposed method changes the weights for each sample based on the mis-classification rat...
Boost Neural Networks by Checkpoints Feng Wang, Guoyizhe Wei, Qiao Liu, Jinxiang Ou, Xian Wei, Hairong Lv Department of Automation, Tsinghua University; Department of Statistics, Stanford University; Software Engineering Institute, East China Normal University; Fuzhou Institute of Data Technology ...
35,852
zcrC_XDUFd
neurips
2021
main
NeurIPS.cc/2021/Conference
10,820
Learning with Holographic Reduced Representations
Holographic Reduced Representations (HRR) are a method for performing symbolic AI on top of real-valued vectors by associating each vector with an abstract concept, and providing mathematical operations to manipulate vectors as if they were classic symbolic objects. This method has seen little use outside of older symb...
[ "Ashwinkumar Ganesan", "Hang Gao", "Sunil Gandhi", "Edward Raff", "Tim Oates", "James Holt", "Mark McLean" ]
[ "Holographic Reduced Representations", "binding", "neuro-symbolic" ]
NeurIPS 2021 Spotlight
Accept (Spotlight)
This paper shows how HRRs can be modified to enable end-to-end differentiable training at scale. While the results are not compelling that such representations will be useful in applications, the reviewers all agreed that the technical method is novel and provides a foundation for other systems to build on.
3
[{"review_id": "X331kVXXlaP", "reviewer": "Reviewer_M8pZ", "summary": "This paper presents a modification to the HRR along with a novel loss that permits efficient learning in large output spaces. The approach is demonstrated on a number of extreme classification tasks and this appears to be the first work that shows t...
Learning with Holographic Reduced Representations Ashwinkumar Ganesan, Hang Gao, Sunil Gandhi, Edward Raff, Tim Oates, James Holt, Mark McLean University of Maryland, Baltimore County; Laboratory for Physical Sciences; Booz Allen Hamilton Abstract Holographic Reduced Representations (HRR) are a method for performing symbo...
58,616
zmbiQmdtg9
neurips
2021
main
NeurIPS.cc/2021/Conference
93
Improved Transformer for High-Resolution GANs
Attention-based models, exemplified by the Transformer, can effectively model long range dependency, but suffer from the quadratic complexity of self-attention operation, making them difficult to be adopted for high-resolution image generation based on Generative Adversarial Networks (GANs). In this paper, we introduce...
[ "Long Zhao", "Zizhao Zhang", "Ting Chen", "Dimitris N. Metaxas", "Han Zhang" ]
[ "Image generation", "Transformers", "Generative adversarial networks" ]
NeurIPS 2021 Poster
Accept (Poster)
The paper proposes a transformer-based generative model that shows better results on multiple standard benchmarks. The model includes multi-axis blocked self-attention at early stages and uses MLP for late stages. The reviewers appreciate the idea of a transformer-based generator, clear writing, and experimental result...
5
[{"review_id": "nuhWYIg2Hhp", "reviewer": "Reviewer_QCfs", "summary": "To address the quadratic complexity of the self-attention operation, this paper proposes a new Transformer-based generator for high-resolution image generation based on GANs, denoted as HiT. In the low-resolution stage, the authors propose a multi-a...
Improved Transformer for High-Resolution GANs Long Zhao, Zizhao Zhang, Ting Chen, Dimitris N. Metaxas, Han Zhang Rutgers University; Google Cloud AI; Google Research Abstract Attention-based models, exemplified by the Transformer, can effectively model long range dependency, but suffer from the quadratic complexity of self-a...
54,951
zMZPDwm3H3
neurips
2021
main
NeurIPS.cc/2021/Conference
10,535
Learning the optimal Tikhonov regularizer for inverse problems
In this work, we consider the linear inverse problem $y=Ax+\varepsilon$, where $A\colon X\to Y$ is a known linear operator between the separable Hilbert spaces $X$ and $Y$, $x$ is a random variable in $X$ and $\epsilon$ is a zero-mean random process in $Y$. This setting covers several inverse problems in imaging includ...
[ "Giovanni S Alberti", "Ernesto De Vito", "Matti Lassas", "Luca Ratti", "Matteo Santacesaria" ]
[ "inverse problems", "regularization", "MMSE", "optimal estimation", "supervised and unsupervised learning", "generalization estimates", "Hilbert spaces" ]
NeurIPS 2021 Poster
Accept (Poster)
This paper studies a basic problem in optimal estimation, whose answer is well-known in the finite-dimensional case, and extends the characterization to inifinite-dimensional settings. In particular suppose we are given an observation of the form $A x + \epsilon$ and our goal is to estimate $x$ and achieve minimum mean...
4
[{"review_id": "yKdzgo6jMI9", "reviewer": "Reviewer_SbHe", "summary": "The paper considers solving linear inverse problems of the form $y=Ax+\\epsilon$, where $A:X\\to X$ denotes a bounded linear operator. The setup is a Bayesian setup with known distribution of source and noise. To solve this problem the paper focuses...
Learning the optimal Tikhonov regularizer for inverse problems Giovanni S. Alberti, Ernesto De Vito MaLGa Center, Department of Mathematics, University of Genoa, Italy Matti La...
40,211
zdTW91r2wKO
neurips
2021
main
NeurIPS.cc/2021/Conference
9,848
Active 3D Shape Reconstruction from Vision and Touch
Humans build 3D understandings of the world through active object exploration, using jointly their senses of vision and touch. However, in 3D shape reconstruction, most recent progress has relied on static datasets of limited sensory data such as RGB images, depth maps or haptic readings, leaving the active exploration...
[ "Edward J. Smith", "David Meger", "Luis Pineda", "Roberto Calandra", "Jitendra Malik", "Adriana Romero", "Michal Drozdzal" ]
[ "active touch", "active sensing", "3D reconstruction", "3D perception", "robotics" ]
NeurIPS 2021 Poster
Accept (Poster)
The paper proposes a learning-based approach to 3D reconstruction using a combination of (optional) visual and tactile measurements. A modified neural network takes as input visual and tactile observations and outputs an estimate of the object's 3D shape. This network is coupled with an active perception policy that ch...
4
[{"review_id": "zoZzaoPsTpS", "reviewer": "Reviewer_Ksky", "summary": "The paper presents data-driven exploration strategies for 3D reconstruction using visual and tactile data, a haptic simulator and a mesh-based 3D shape reconstruction model.\n", "questions": "", "limitations": "", "rating": 5, "confidence": 4, "soun...
Active 3D Shape Reconstruction from Vision and Touch Edward Smith, David Meger, Luis Pineda, Roberto Calandra, Jitendra Malik, Adriana Romero-Soriano, Michal Drozdzal Facebook AI Research; McGill University; University of California, Berkeley Abstract Humans build 3D understandings of the world through active ...
58,767
zoQJBVrhnn3
neurips
2021
main
NeurIPS.cc/2021/Conference
5,876
Learning to Safely Exploit a Non-Stationary Opponent
In dynamic multi-player games, an effective way to exploit an opponent's weaknesses is to build a perfectly accurate opponent model. This renders the learning problem a single-agent optimization which can be solved by typical reinforcement learning. However, naive behavior cloning may not suffice to train an exploiting...
[ "Zheng Tian", "Hang Ren", "Yaodong Yang", "Yuchen Sun", "Ziqi Han", "Ian Davies", "Jun Wang" ]
[ "multi-agent learning", "reinforcement learning", "opponent modeling" ]
NeurIPS 2021 Submitted
Reject
There was a detailed discussion about this paper, and the relationship to Bard et al. was controversial, with some reviewers considering this a necessary benchmark and others agreeing with the authors that it isn't (necessarily). Since it's controversial, I'm inclined to not make the decision based on the Bard et al. ...
4
[{"review_id": "yxFkhuxyzNE", "reviewer": "Reviewer_8Pgk", "summary": "The paper proposes to use the Dirichlet process for opponent modelling in games. It argues that the approach is particularly suitable for non-stationary opponents. It then proposes an algorithm to compute a counter-strategy to the model based on the...
Learning to Safely Exploit a Non-Stationary Opponent Anonymous Author(s) Abstract In dynamic multi-player games, an effective way to exploit an opponent's weaknesses is to build a perfectly accurate opponent model. This renders the learning problem a single-agent optimization which can be solved ...
49,658
zJynVlnoObx
neurips
2021
main
NeurIPS.cc/2021/Conference
4,428
Exploiting Local Convergence of Quasi-Newton Methods Globally: Adaptive Sample Size Approach
In this paper, we study the application of quasi-Newton methods for solving empirical risk minimization (ERM) problems defined over a large dataset. Traditional deterministic and stochastic quasi-Newton methods can be executed to solve such problems; however, it is known that their global convergence rate may not be be...
[ "Qiujiang Jin", "Aryan Mokhtari" ]
[ "Quasi-Newton methods", "adaptive sample size methods", "second-order methods", "large-scale optimization." ]
NeurIPS 2021 Poster
Accept (Poster)
This paper focuses on characterizing rates of convergence for Quasi-Newton method applied to empirical risk minimization. Traditional stochastic Quasi-Newton methods have not enjoyed guarantees that are faster than their first order counterparts. This paper uses an adaptive sampling scheme to achieve global superlinear...
3
[{"review_id": "nCMbRif0Cv", "reviewer": "Reviewer_4SN8", "summary": "This paper \n- introduces a new quasi-Newton (QN) algorithm called AdaQN to solve the ERM problem. This method is adaptive since it increases the sample size after few (3) runs of BFGS on a small ERM subproblem. \n- gives its corresponding convergenc...
Exploiting Local Convergence of Quasi-Newton Methods Globally: Adaptive Sample Size Approach Qiujiang Jin, Aryan Mokhtari ECE Department, The University of Texas at Austin, Austin, TX 78712, USA ...
46,929
z9Xs6T0y9Eg
neurips
2021
main
NeurIPS.cc/2021/Conference
7,742
Improved Guarantees for Offline Stochastic Matching via new Ordered Contention Resolution Schemes
Matching is one of the most fundamental and broadly applicable problems across many domains. In these diverse real-world applications, there is often a degree of uncertainty in the input which has led to the study of stochastic matching models. Here, each edge in the graph has a known, independent probability of existi...
[ "Brian Brubach", "Nathaniel Grammel", "Will Ma", "Aravind Srinivasan" ]
[ "Stochastic Optimization", "Combinatorial Optimization", "Discrete Optimization", "Approximation Algorithms", "Matching", "Stochastic Matching", "Prophet Inequality", "Prophet Secretary", "Contention Resolution" ]
NeurIPS 2021 Poster
Accept (Poster)
All of the reviewers liked this paper and felt that it should be accepted.
4
[{"review_id": "tdfKLSetoD6", "reviewer": "Reviewer_1hJb", "summary": "The paper considers a version of the stochastic matching problem. We have a graph with known weights on the edges but where each edge is present with probability p_e (also known). Sequentially, a decision-maker has to probe edges. When DM probes an ...
Improved Guarantees for Offline Stochastic Matching via New Ordered Contention Resolution Schemes. Brian Brubach, Computer Science Department, Wellesley College, Wellesley, MA 02481. Nathaniel Grammel, Department of Computer Science, University of Maryland, College Park, MD 20742. Will...
43,349
zEuLFJCRk4X
neurips
2,021
main
NeurIPS.cc/2021/Conference
10,724
Imitating Deep Learning Dynamics via Locally Elastic Stochastic Differential Equations
Understanding the training dynamics of deep learning models is perhaps a necessary step toward demystifying the effectiveness of these models. In particular, how do training data from different classes gradually become separable in their feature spaces when training neural networks using stochastic gradient descent? In...
[ "Jiayao Zhang", "Hua Wang", "Weijie J Su" ]
[ "deep learning", "stochastic differential equations", "ordinary differential equations", "local elasticity" ]
NeurIPS 2021 Poster
Accept (Poster)
This paper studies the training dynamics of neural network by considering a stochastic differential equation model based on the local elasticity assumption of features of neural networks. The proposed model is justified by numerical experiments and sheds new light on the training dynamics. Overall the referees all find...
4
[{"review_id": "kXbNBc_N8k3", "reviewer": "Reviewer_imQZ", "summary": "The paper proposes a proxy tractable dynamics to study how the latent features of neural networks evolve inter- and intra-class during training. The core of this dynamics is a linear ODE, which is shown to exhibit inter-class separability once a “lo...
Imitating Deep Learning Dynamics via Locally Elastic Stochastic Differential Equations. Jiayao Zhang, Hua Wang, Weijie J. Su. University of Pennsylvania. Abstract: Understanding the training dynamics of deep learning models is perhaps a necessary step toward demystifying the effective...
41,000
zzdf0CirJM4
neurips
2,021
main
NeurIPS.cc/2021/Conference
11,001
Batch Active Learning at Scale
The ability to train complex and highly effective models often requires an abundance of training data, which can easily become a bottleneck in cost, time, and computational resources. Batch active learning, which adaptively issues batched queries to a labeling oracle, is a common approach for addressing this problem. T...
[ "Gui Citovsky", "Giulia DeSalvo", "Claudio Gentile", "Lazaros Karydas", "Anand Rajagopalan", "Afshin Rostamizadeh", "Sanjiv Kumar" ]
[ "active learning", "large scale", "batch active learning" ]
NeurIPS 2021 Poster
Accept (Poster)
A majority of reviewers voted for acceptance (including the ones shown as official reviewers and an emergency reviewer that I included last minute). The only reviewer voting for rejection is reviewer YihA, who seems too negative without strong justification, given that most of their claims are not supported by th...
3
[{"review_id": "j_UAA5BRei1", "reviewer": "Reviewer_m1qg", "summary": "The paper introduces a new active learning acquisition function, Cluster-Margin, which clusters the samples with the lowest confidence scores (margin scores), and selects samples round-robin from the clusters to obtain a diverse set of low-confidenc...
Batch Active Learning at Scale. Gui Citovsky, Giulia DeSalvo, Claudio Gentile, Lazaros Karydas, Anand Rajagopalan, Afshin Rostamizadeh, Sanjiv Kumar. Google Research. Abstract: The ability to train complex and highly effective models often require...
44,540
z71OSKqTFh7
neurips
2,021
main
NeurIPS.cc/2021/Conference
1,151
A Universal Law of Robustness via Isoperimetry
Classically, data interpolation with a parametrized model class is possible as long as the number of parameters is larger than the number of equations to be satisfied. A puzzling phenomenon in the current practice of deep learning is that models are trained with many more parameters than what this classical theory woul...
[ "Sebastien Bubeck", "Mark Sellke" ]
[ "adversarial robustness", "over-parametrization", "isoperimetry" ]
NeurIPS 2021 Oral
Accept (Oral)
This paper adds an interesting ingredient to the substantial literature on interpolating / memorizing data using neural networks by asking how things change if the neural network is required to be smooth (Lipschitz). The results essentially show that there is a cost to smoothness, appearing as a multiplicative factor o...
4
[{"review_id": "zZ8gCQz5KJk", "reviewer": "Reviewer_Z64o", "summary": "It is generally possible to interpolate train data as long as the number of parameters of our model exceeds the number of training points. The present paper asks how many parameters do we need if we want to interpolate in a smooth manner. If the smo...
A Universal Law of Robustness via Isoperimetry. Sébastien Bubeck, Microsoft Research. Mark Sellke, Stanford University. Abstract: Classically, data interpolation with a parametrized model class is possible as long as the number of parameters is larger than the number of equations to be satisf...
38,946
z4L8_Egn5Ey
neurips
2,021
main
NeurIPS.cc/2021/Conference
4,739
Global Convergence to Local Minmax Equilibrium in Classes of Nonconvex Zero-Sum Games
We study gradient descent-ascent learning dynamics with timescale separation ($\tau$-GDA) in unconstrained continuous action zero-sum games where the minimizing player faces a nonconvex optimization problem and the maximizing player optimizes a Polyak-Lojasiewicz (PL) or strongly-concave (SC) objective. In contrast to ...
[ "Tanner Fiez", "Lillian J Ratliff", "Eric Mazumdar", "Evan Faulkner", "Adhyyan Narang" ]
[ "zero-sum games", "nonconvex-strongly-concave", "nonconvex-PL", "game theory", "dynamical systems", "stability" ]
NeurIPS 2021 Poster
Accept (Poster)
The authors study the behavior of simultaneous gradient descent-ascent with timescale separation (i.e. a slower learning rate for the descent dynamics compared to the ascent dynamics) to solve unconstrained and sequential min_x max_y f(x,y) optimization problems for objectives f(x,y) that are nonconvex wrt the variable...
3
[{"review_id": "l1BjRP-HFX", "reviewer": "Reviewer_kefj", "summary": "The authors focus on the analysis of gradient descent ascent dynamics with timescale separation to solve unconstrained min-max optimization for landscapes that are non-convex on the min player (say x) and strongly concave or satisfy the PL condition ...
Global Convergence to Local Minmax Equilibrium in Classes of Nonconvex Zero-Sum Games. Tanner Fiez, University of Washington. Lillian J. Ratliff, University of Washington. Eric Mazumdar, California Institute of Technology. Evan Faulkner, Adhyyan Narang, University of Washing...
55,957
zImiB39pyUL
neurips
2,021
main
NeurIPS.cc/2021/Conference
9,024
BooVAE: Boosting Approach for Continual Learning of VAE
Variational autoencoder (VAE) is a deep generative model for unsupervised learning, allowing to encode observations into the meaningful latent space. VAE is prone to catastrophic forgetting when tasks arrive sequentially, and only the data for the current one is available. We address this problem of continual learning ...
[ "Evgenii Egorov", "Anna Kuzina", "Evgeny Burnaev" ]
[ "Continual Learning", "Variational Autoencoder", "VAE", "Catastrophic Forgetting", "Entropy regularisation", "Boosting", "Functional regularisation" ]
NeurIPS 2021 Poster
Accept (Poster)
The submission proposes a learnable prior for continual learning of a VAE to retain performance on previous tasks. The reviewers were unanimous that the paper is above the threshold for acceptance at NeurIPS. Quoting from Reviewer T9Fg "The idea is quite innovative. It is a significant improvement over coreset select...
4
[{"review_id": "yq4HFbTlWkJ", "reviewer": "Reviewer_T9Fg", "summary": "This manuscript proposes a continual learning algorithm for VAE. The encoder q(z|x) and decoder p(x|z) are apparently untouched when new tasks are learned; instead, continual learning here focuses on the prior pi(z). The prior is adapted to approx...
BooVAE: Boosting Approach for Continual Learning of VAE. Evgenii Egorov, University of Amsterdam. Anna Kuzina, Vrije Universiteit. Evgeny Burnaev, Skoltech, AIRI. Abstract: Variational autoencoder (VAE) is a deep generative model for unsupervised learning...
41,754
zGsRcuoR5-0
neurips
2,021
main
NeurIPS.cc/2021/Conference
2,007
Sample Selection with Uncertainty of Losses for Learning with Noisy Labels
In learning with noisy labels, the sample selection approach is very popular, which regards small-loss data as correctly labeled data during training. However, losses are generated on-the-fly based on the model being trained with noisy labels, and thus large-loss data are likely but not certainly to be incorrect. There ...
[ "Xiaobo Xia", "Tongliang Liu", "Bo Han", "Mingming Gong", "Jun Yu", "Gang Niu", "Masashi Sugiyama" ]
[ "Learning with noisy labels", "Sample selection", "Uncertainty" ]
NeurIPS 2021 Submitted
Reject
The submission considers the problem of learning from noisy labels, and proposes to incorporate an interval estimate of the training loss into the sample selection approach. The intuition is to distinguish between mislabelled data and under-represented data. Theoretical and empirical results show the promise of the method. ...
3
[{"review_id": "PNpntsRlKhQ", "reviewer": "Reviewer_mZp6", "summary": "The authors discuss potential weakness of previous sample selection criteria and propose new selection criteria based on the uncertainty of losses (not losses themselves). Some theoretical analysis on the criteria is provided. Then, they experimen...
Sample Selection with Uncertainty of Losses for Learning with Noisy Labels Anonymous Author(s) Affiliation Address email Abstract In learning with noisy labels, the sample selection approach is very popular, which regards small-loss data as correctly labeled during training. However, losses are generated on-the-fly bas...
52,080
zyD5AiyLuzG
neurips
2,021
main
NeurIPS.cc/2021/Conference
4,189
The Adaptive Doubly Robust Estimator and a Paradox Concerning Logging Policy
The doubly robust (DR) estimator, which consists of two nuisance parameters, the conditional mean outcome and the logging policy (the probability of choosing an action), is crucial in causal inference. This paper proposes a DR estimator for dependent samples obtained from adaptive experiments. To obtain an asymptotical...
[ "Masahiro Kato", "Kenichiro McAlinn", "Shota Yasui" ]
[ "Doubly robust estimator", "Double/debiased machine learning", "Causal inference", "Semiparametric efficiency", "Dependent samples", "Adaptive experiments" ]
NeurIPS 2021 Poster
Accept (Poster)
The paper studies off policy estimation using data collected by a changing (adaptive) and unknown logging policy. This is an important and under-explored setting, since adaptive experimentation is increasingly common in many application domains (e.g., online platforms). The paper gives an adaptive doubly-robust estimat...
3
[{"review_id": "mup5z2JydvU", "reviewer": "Reviewer_sEtk", "summary": "The paper studies off-policy evaluation from data generated from unknown and adaptive logging policies (logging policy can be updated based on past observations). The paper modified the doubly robust estimator in the non-adaptive and known-logging-p...
The Adaptive Doubly Robust Estimator and a Paradox Concerning Logging Policy. Masahiro Kato, CyberAgent, Inc. Kenichiro McAlinn, Temple University. Shota Yasui, CyberAgent, Inc. Abstract: The doubly robust (DR) estimator, which consists of two nuisance parameters, the conditional mean outcome and the logging policy (the pro...
53,367
z5-chidgZU3
neurips
2,021
main
NeurIPS.cc/2021/Conference
9,442
Risk Monotonicity in Statistical Learning
Acquisition of data is a difficult task in many applications of machine learning, and it is only natural that one hopes and expects the population risk to decrease (better performance) monotonically with increasing data points. It turns out, somewhat surprisingly, that this is not the case even for the most standard al...
[ "Zakaria Mhammedi" ]
[ "Statistical Learning", "Risk Monotonicity", "Concentration Inequalities", "PAC-Bayesian Bounds" ]
NeurIPS 2021 Oral
Accept (Oral)
This work makes a novel, surprising, and significant contribution to the area of risk monotonicity, a topic of great interest to the machine learning community. A key consequence of the authors' results is a positive resolution of one of the open problems in Viering et al.'s COLT 2019 open problem paper, which is the q...
4
[{"review_id": "k6qUJdZRoxA", "reviewer": "Reviewer_qjbn", "summary": "The paper studies the problem of risk monotonicity, that is, the property that the population loss decreases monotonically with increasing data points. The paper proposes an algorithm that converts any consistent base algorithm into a risk-monotonic...
Risk-Monotonicity in Statistical Learning. Zakaria Mhammedi, Massachusetts Institute of Technology. Abstract: Acquisition of data is a difficult task in many applications of machine learning, and it is only natural that one hopes and expects the population risk to decrease (better performance) monotonicall...
49,002
zlhpIYub2d0
neurips
2,021
main
NeurIPS.cc/2021/Conference
8,794
Variational Automatic Curriculum Learning for Sparse-Reward Cooperative Multi-Agent Problems
We introduce an automatic curriculum algorithm, Variational Automatic Curriculum Learning (VACL), for solving challenging goal-conditioned cooperative multi-agent reinforcement learning problems. We motivate our curriculum learning paradigm through a variational perspective, where the learning objective can be decompos...
[ "Jiayu Chen", "Yuanxin Zhang", "Yuanfan Xu", "Huimin Ma", "Huazhong Yang", "Jiaming Song", "Yu Wang", "Yi Wu" ]
[ "multi-agent reinforcement learning", "curriculum learning", "variational inference" ]
NeurIPS 2021 Poster
Accept (Poster)
This work introduces a new curriculum learning algorithm for cooperative multi-agent reinforcement learning. All reviewers appreciated the method's novelty and that the paper was well-written. All found the theoretical motivation convincing and insightful. There were some initial concerns about the experimental results...
3
[{"review_id": "mVfWW-Lirdt", "reviewer": "Reviewer_9Js4", "summary": "This paper proposed a variational automatic curriculum learning framework for solving goal-conditioned cooperative multiagent reinforcement learning problems. In this framework, the learning objective is decomposed into two parts: policy learning on...
Variational Automatic Curriculum Learning for Sparse-Reward Cooperative Multi-Agent Problems. Jiayu Chen, Yuanxin Zhang, Yuanfan Xu, Huimin Ma, Huazhong Yang, Jiaming Song, Yu Wang, Yi Wu. Tsinghua University; Shanghai Qi Zhi Institute; University of Science and Technology Beijing; Stanford University...
45,202
zQvxc8ul2rR
neurips
2,021
main
NeurIPS.cc/2021/Conference
6,500
Stabilizing Deep Q-Learning with ConvNets and Vision Transformers under Data Augmentation
While agents trained by Reinforcement Learning (RL) can solve increasingly challenging tasks directly from visual observations, generalizing learned skills to novel environments remains very challenging. Extensive use of data augmentation is a promising technique for improving generalization in RL, but it is often foun...
[ "Nicklas Hansen", "Hao Su", "Xiaolong Wang" ]
[ "reinforcement learning", "generalization", "data augmentation" ]
NeurIPS 2021 Poster
Accept (Poster)
This paper investigates the problems related to the high variance in the targets that arise in TD-learning when the network is trained with data augmentation. The paper proposes a simple and effective way to address this problem by applying the data augmentation only on the online Q-network, not on the target Qs, and a ...
5
[{"review_id": "kH_8QIb005l", "reviewer": "Reviewer_UP5u", "summary": "The paper presents a way of improving training stability of DrQ (an image-based RL algorithm that uses data augmentation) by applying image augmentation more carefully. The authors propose to input a mix of augmented and un-augmented observations in...
Stabilizing Deep Q-Learning with ConvNets and Vision Transformers under Data Augmentation. Nicklas Hansen, Hao Su, Xiaolong Wang. University of California, San Diego. Abstract: While agents trained by Reinforcement Learning (RL) can solve increasingly challenging task...
55,140
z36cUrI0jKJ
neurips
2,021
main
NeurIPS.cc/2021/Conference
1,983
On Effective Scheduling of Model-based Reinforcement Learning
Model-based reinforcement learning has attracted wide attention due to its superior sample efficiency. Despite its impressive success so far, it is still unclear how to appropriately schedule the important hyperparameters to achieve adequate performance, such as the real data ratio for policy optimization in Dyna-style...
[ "Hang Lai", "Jian Shen", "Weinan Zhang", "Yimin Huang", "Xing Zhang", "Ruiming Tang", "Yong Yu", "Zhenguo Li" ]
[ "deep reinforcement learning", "model-based reinforcement learning", "automatic hyperparameter scheduling" ]
NeurIPS 2021 Poster
Accept (Poster)
Reviewers agree that the paper is well-motivated and well-written, and that the proposed method, motivated by theoretical analysis, shows convincing experimental results on hyperparameter tuning of model-based RL, a hard yet important problem to address. On the other hand, minor concerns remain mainly about whether the theo...
3
[{"review_id": "ef4d_VUgf1z", "reviewer": "Reviewer_yNuW", "summary": "The paper is about learning a hyper-parameter setter for some of the hyper-parameters used to train a model-based reinforcement learning algorithm. Theoretical results are provided to show that there exists an optimal ratio of real-data to model-gen...
On Effective Scheduling of Model-based Reinforcement Learning. Hang Lai, Jian Shen, Weinan Zhang, Yimin Huang, Xing Zhang, Ruiming Tang, Yong Yu, Zhenguo Li. Shanghai Jiao Tong University; Huawei Noah's Ark Lab. Abstract: Model-based reinforcement learning has a...
42,748
z3tlL2MeTK2
neurips
2,021
main
NeurIPS.cc/2021/Conference
10,135
Nested Counterfactual Identification from Arbitrary Surrogate Experiments
The Ladder of Causation describes three qualitatively different types of activities an agent may be interested in engaging in, namely, seeing (observational), doing (interventional), and imagining (counterfactual) (Pearl and Mackenzie, 2018). The inferential challenge imposed by the causal hierarchy is that data is col...
[ "Juan D. Correa", "Sanghack Lee", "Elias Bareinboim" ]
[ "counterfactuals", "identification", "causal inference", "experimental data" ]
NeurIPS 2021 Poster
Accept (Poster)
This paper was discussed at length between the AC and SAC, and between the SAC and program chairs. The AC was strongly against publication due to the lack of appropriate engagement with prior work. The SAC and program chairs agree with this concern. The discussion of the relationship with prior work is insufficient ...
5
[{"review_id": "fMDhWQyPF53", "reviewer": "Reviewer_hjGq", "summary": "The authors study the identification of nested counterfactuals from an arbitrary combination of observations and experiments. They prove the counterfactual unnesting theorem, which allows for the mapping of nested to unnested counterfactuals. Then t...
Nested Counterfactual Identification from Arbitrary Surrogate Experiments. Juan D. Correa, Columbia University. Sanghack Lee, Seoul National University. Elias Bareinboim, Columbia University. Abstract: The Ladder of Causation describes three qualitatively differ...
41,335
zvTBIFQ43Sd
neurips
2,021
main
NeurIPS.cc/2021/Conference
111
OctField: Hierarchical Implicit Functions for 3D Modeling
Recent advances in localized implicit functions have enabled neural implicit representation to be scalable to large scenes. However, the regular subdivision of 3D space employed by these approaches fails to take into account the sparsity of the surface occupancy and the varying granularities of geometric details. As a ...
[ "Jia-Heng Tang", "Weikai Chen", "Jie Yang", "Bo Wang", "Songrun Liu", "Bo Yang", "Lin Gao" ]
[ "Neural implicit function", "Octree", "3D representation" ]
NeurIPS 2021 Poster
Accept (Poster)
Post rebuttal, the paper was the subject of extensive discussion both between the authors and reviewers and between the reviewers themselves. The reviewers were overall generally in agreement with many facts about the paper, but had good faith disagreements in terms of where to draw boundaries for novelty and contribut...
4
[{"review_id": "msrOKHoKpkz", "reviewer": "Reviewer_Nodw", "summary": "The paper proposes a hierarchical implicit grid representation for modeling implicit shape-spaces. While concurrent research has focused either on hierarchical representations in overfitting mode, or on fixed grids in shape-space mode, the paper pro...
OctField: Hierarchical Implicit Functions for 3D Modeling. Jia-Heng Tang, Weikai Chen, Jie Yang, Bo Wang, Songrun Liu, Bo Yang, and Lin Gao. Beijing Key Laboratory of Mobile Computing and Pervasive Device, Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences; Tenc...
54,364
yw5KKWraUk7
neurips
2,021
main
NeurIPS.cc/2021/Conference
4,165
Explicable Reward Design for Reinforcement Learning Agents
We study the design of explicable reward functions for a reinforcement learning agent while guaranteeing that an optimal policy induced by the function belongs to a set of target policies. By being explicable, we seek to capture two properties: (a) informativeness so that the rewards speed up the agent's convergence, a...
[ "Rati Devidze", "Goran Radanovic", "Parameswaran Kamalaruban", "Adish Singla" ]
[ "reinforcement learning", "reward design", "explicable reward functions" ]
NeurIPS 2021 Poster
Accept (Poster)
The reviewers agree that the problem of reward design is important and particularly relevant for safe reinforcement learning (RL). It is also a timely subject now that the deployment of RL systems in the real world is becoming increasingly more common. There was also a consensus among the reviewers that the proposed ap...
4
[{"review_id": "zpzF5ovaXCv", "reviewer": "Reviewer_jgko", "summary": "This paper proposes an optimization-based framework for designing reward functions that balance sparsity/interpretability with informativeness. A novel optimization framework is proposed to directly learn these new reward signals given an initial re...
Explicable Reward Design for Reinforcement Learning Agents. Rati Devidze, Goran Radanovic, Parameswaran Kamalaruban, Adish Singla. Max Planck Institute for Software Systems (MPI-SWS), Saarbrücken, Germany; The Alan Turi...
52,891
yr7nrY18Xu
neurips
2,021
main
NeurIPS.cc/2021/Conference
6,149
Learning and Generalization in RNNs
Simple recurrent neural networks (RNNs) and their more advanced cousins LSTMs etc. have been very successful in sequence modeling. Their theoretical understanding, however, is lacking and has not kept pace with the progress for feedforward networks, where a reasonably complete understanding in the special case of highl...
[ "Abhishek Panigrahi", "Navin Goyal" ]
[ "Recurrent Neural Networks", "Generalization", "Overparametrized Neural Networks", "Optimization" ]
NeurIPS 2021 Poster
Accept (Poster)
This paper proves that sufficiently overparameterized RNNs can learn functions of their input that can be computed by a hidden single layer MLP operating on the entire sequence concatenated together (the size of which is related to the size of the required RNN). If I understand correctly, the main idea is to show that ...
4
[{"review_id": "oAjR6RD4Zdm", "reviewer": "Reviewer_Gmdk", "summary": "In this paper, the authors try to prove that overparametrized RNNs can efficiently learn concept classes consisting of one-hidden-layer neural networks that take the entire sequence of tokens as input. The training algorithm used is SGD with sufficien...
Learning and Generalization in RNNs. Abhishek Panigrahi, Department of Computer Science, Princeton University. Navin Goyal, Microsoft Research India. Abstract: Simple recurrent neural networks (RNNs) and their more advanced cousins LSTMs etc. have been very successful in sequence m...
45,885
ytke6qKpxtr
neurips
2,021
main
NeurIPS.cc/2021/Conference
10,323
STORM+: Fully Adaptive SGD with Recursive Momentum for Nonconvex Optimization
In this work we investigate stochastic non-convex optimization problems where the objective is an expectation over smooth loss functions, and the goal is to find an approximate stationary point. The most popular approach to handling such problems is variance reduction techniques, which are also known to obtain tight co...
[ "Kfir Yehuda Levy", "Ali Kavis", "Volkan Cevher" ]
[ "adaptive methods", "recursive momentum", "nonconvex optimization", "stochastic optimization" ]
NeurIPS 2021 Poster
Accept (Poster)
This paper generated a lot of discussion among reviewers. Overall it seems that the contribution is novel and provides an interesting new learning rate schedule meriting acceptance. However, the claims of optimality in the rates are not well-supported. The extra assumption that the objective is bounded above and below m...
4
[{"review_id": "xHL59eY9HeX", "reviewer": "Reviewer_Ft5i", "summary": "The paper proposes STORM+, which adopts the STORM method with a new way to set the learning rate (parameter free momentum based). The method also obtains the state-of-the-art convergence result for finding an approximate stationary point. ", "questi...
STORM+: Fully Adaptive SGD with Recursive Momentum for Nonconvex Optimization. Kfir Y. Levy, Technion. Ali Kavis, EPFL. Volkan Cevher, EPFL. Abstract: In this work we investigate stochastic non-convex optimization problems where the objective is an expectation ove...
31,797
z-l1kpDXs88
neurips
2,021
main
NeurIPS.cc/2021/Conference
7,474
TokenLearner: Adaptive Space-Time Tokenization for Videos
In this paper, we introduce a novel visual representation learning which relies on a handful of adaptively learned tokens, and which is applicable to both image and video understanding tasks. Instead of relying on hand-designed splitting strategies to obtain visual tokens and processing a large number of densely sample...
[ "Michael S Ryoo", "AJ Piergiovanni", "Anurag Arnab", "Mostafa Dehghani", "Anelia Angelova" ]
[ "visual representation", "transformers", "video understanding" ]
NeurIPS 2021 Poster
Accept (Poster)
After the rebuttal period all reviewers rate this paper as being past the threshold for acceptance. The authors did a good job of addressing questions of novelty during the rebuttal and additional experiments were well received by reviewers. The authors have promised to clean and polish some parts of the manuscript fla...
4
[{"review_id": "sTW566vIapA", "reviewer": "Reviewer_TXug", "summary": "The paper proposes a transformer-based neural network architecture for extracting video representations. Past approaches discretize entire videos into 3D (RGB-T) chunks as input 'tokens' to transformers. Instead in this work, an additional operation...
TokenLearner: Adaptive Space-Time Tokenization for Videos. Michael S. Ryoo, AJ Piergiovanni, Anurag Arnab, Mostafa Dehghani, Anelia Angelova. Google Research; Stony Brook University. Abstract: In this paper, we introduce a novel visual represent...
43,075
z-X_PpwaroO
neurips
2,021
main
NeurIPS.cc/2021/Conference
5,432
Computer-Aided Design as Language
Computer-Aided Design (CAD) applications are used in manufacturing to model everything from coffee mugs to sports cars. These programs are complex and require years of training and experience to master. A component of all CAD models particularly difficult to make are the highly structured 2D sketches that lie at the he...
[ "Yaroslav Ganin", "Sergey Bartunov", "Yujia Li", "Ethan Anderson Keller", "Stefano Saliceti" ]
[ "generative models", "structured objects", "computer-aided design", "transformers", "pointers", "program synthesis" ]
NeurIPS 2021 Poster
Accept (Poster)
This paper presents a new approach to generating constrained CAD sketches. The key challenge in this problem is generating constraints that relate all the different strokes in a sketch. At a high level, the main idea in this paper is to treat this as a language generation problem, since the sketches with their constrai...
4
[{"review_id": "lbFvsVPWLT", "reviewer": "Reviewer_K3kn", "summary": "This paper provides a large dataset of structured 2D sketches and proposes transformer-based models capable of both unconditional and conditional generations of such sketches. The authors present a novel representation for structured sketches based o...
Computer-Aided Design as Language. Yaroslav Ganin, Sergey Bartunov, Yujia Li, Ethan Keller, Stefano Saliceti. DeepMind; Onshape. Abstract: Computer-Aided Design (CAD) applications are used in manufacturing to model everything from coffee mugs to sports cars. These programs are complex and require years of training and experience to master. A component of all CAD...
42,958
yxHPRAqCqn
neurips
2,021
main
NeurIPS.cc/2021/Conference
10,007
Convergence Rates of Stochastic Gradient Descent under Infinite Noise Variance
Recent studies have provided both empirical and theoretical evidence illustrating that heavy tails can emerge in stochastic gradient descent (SGD) in various scenarios. Such heavy tails potentially result in iterates with diverging variance, which hinders the use of conventional convergence analysis techniques that rel...
[ "Hongjian Wang", "Mert Gurbuzbalaban", "Lingjiong Zhu", "Umut Simsekli", "Murat A Erdogdu" ]
[ "SGD", "heavy tailed noise", "infinite variance", "Polyak-Ruppert averaging" ]
NeurIPS 2021 Poster
Accept (Poster)
This paper examines the convergence of stochastic gradient descent in strongly convex minimization problems. The novelty of the analysis is that the authors do not assume that the variance of the gradient queries is finite; instead, they consider "heavy-tailed" gradient noise models with bounded moments for some $p\in[...
4
[{"review_id": "fLIdkjZKkRq", "reviewer": "Reviewer_2Gbm", "summary": "The authors analyze SGD for a class of strongly-convex functions and under (possibly) infinite noise variance, i.e., the pth moment of the noise exists for $p \\in [1,2)$, but noise variance is not assumed to be bounded. Such noise behavior is defin...
Convergence Rates of Stochastic Gradient Descent under Infinite Noise Variance. Hongjian Wang, Carnegie Mellon University. Mert Gürbüzbalaban, Rutgers University. Lingjiong Zhu, Florida State University. Umut Şimşekli, Murat A. Erdogdu. INRIA & ENS, PSL Research Univer...
45,718
zDtFO9vohmF
neurips
2,021
main
NeurIPS.cc/2021/Conference
9,630
Kernel Functional Optimisation
Traditional methods for kernel selection rely on parametric kernel functions or a combination thereof and although the kernel hyperparameters are tuned, these methods often provide sub-optimal results due to the limitations induced by the parametric forms. In this paper, we propose a novel formulation for kernel select...
[ "Arun Kumar Anjanapura Venkatesh", "Alistair Shilton", "Santu Rana", "Sunil Gupta", "Svetha Venkatesh" ]
[ "Non-parametric kernels", "Kernel learning", "Bayesian functional optimisation", "Hyperkernels", "Kernel machines", "Gaussian Process", "Hyperparameter tuning", "Machine learning", "Support Vector Machines", "Reproducing Kernel Hilbert Spaces", "Black-box optimisation", "sample-efficient optim...
NeurIPS 2021 Poster
Accept (Poster)
The paper proposes a zero-order optimization method where the optimized variable is a kernel function in Hyper-RKHS induced by a selected hyper-kernel. The algorithm is applied to the optimization of kernel functions for C-SVM and GP regression. Experiments show significant improvement over existing methods. The review...
5
[{"review_id": "lpzP9SzcWs3", "reviewer": "Reviewer_ngk1", "summary": "Paper presents a novel framework for Bayesian learning of kernels using hyperkernels. It is able to address a broader set of kernels that are stationary and non-stationary, learn them efficiently and show state-of-the-art results on synthetic and re...
Kernel Functional Optimisation Arun Kumar A V, Alistair Shilton, Santu Rana, Sunil Gupta, Svetha Venkatesh Applied Artificial Intelligence Institute (A²I²), Deakin University, Waurn Ponds, Geelong, Australia Abstract Traditiona...
46,088
z1F9G4VnGZ-
neurips
2,021
main
NeurIPS.cc/2021/Conference
5,385
CentripetalText: An Efficient Text Instance Representation for Scene Text Detection
Scene text detection remains a grand challenge due to the variation in text curvatures, orientations, and aspect ratios. One of the hardest problems in this task is how to represent text instances of arbitrary shapes. Although many methods have been proposed to model irregular texts in a flexible manner, most of them l...
[ "Tao Sheng", "Jie Chen", "Zhouhui Lian" ]
[ "Scene Text Detection", "Scene Text Recognition", "Text Instance Representation" ]
NeurIPS 2021 Poster
Accept (Poster)
This paper presents a new method for scene text detection and recognition based on the integration of individual local responses and their centripetal shifts. The paper has received 4 expert reviews, which were quite positive, with in particular one reviewer championing the paper. While the reviewers (and the AC) agre...
4
[{"review_id": "pZ1WUyWsxXp", "reviewer": "Reviewer_qKXD", "summary": "The paper deals with the problem of scene text detection, focusing on irregular text.\nThe paper proposes a segmentation approach, in line with recent methods that deal with irregular text detection.\nThe Main contributions of the paper are:\n- A de...
CentripetalText: An Efficient Text Instance Representation for Scene Text Detection Tao Sheng, Jie Chen, Zhouhui Lian Wangxuan Institute of Computer Technology, Peking University, Beijing, China {shengtao, jiechen01, lianzhouhui}@pku.edu.cn Abstract Scene text detection remains a grand challenge due to the variation in t...
40,370
yxg-i8DAHK
neurips
2,021
main
NeurIPS.cc/2021/Conference
407
Stabilizing Dynamical Systems via Policy Gradient Methods
Stabilizing an unknown control system is one of the most fundamental problems in control systems engineering. In this paper, we provide a simple, model-free algorithm for stabilizing fully observed dynamical systems. While model-free methods have become increasingly popular in practice due to their simplicity and fle...
[ "Juan Carlos Perdomo", "Jack Umenberger", "Max Simchowitz" ]
[ "control theory", "stability", "LQR", "policy gradients" ]
NeurIPS 2021 Poster
Accept (Poster)
All reviewers find an interesting and important idea in this paper. Although some reviewers point out that a similar idea is found in some existing works, the authors clarify the significant differences of their work from those. Overall, this is a nice paper that could appear in NeurIPS. Therefore, my recommendatio...
3
[{"review_id": "uRPSVuxg10X", "reviewer": "Reviewer_uRW9", "summary": "This paper proposes a model-free algorithm for stabilizing linear and nonlinear dynamical systems. The algorithm solves a series of discounted LQR problems using policy gradient, and provably converges to a stabilizing controller for LTI systems; fo...
Stabilizing Dynamical Systems via Policy Gradient Methods Juan C. Perdomo University of California, Berkeley Jack Umenberger MIT Max Simchowitz MIT Abstract Stabilizing an unknown control system is one of the most fundamental problems in control systems engineering. In this paper, we provide a simple, model-free algorithm for...
45,431
yrqn9rQO2YT
neurips
2,021
main
NeurIPS.cc/2021/Conference
5,631
Instance-Dependent Bounds for Zeroth-order Lipschitz Optimization with Error Certificates
We study the problem of zeroth-order (black-box) optimization of a Lipschitz function $f$ defined on a compact subset $\mathcal{X}$ of $\mathbb{R}^d$, with the additional constraint that algorithms must certify the accuracy of their recommendations. We characterize the optimal number of evaluations of any Lipschitz fun...
[ "Francois Bachoc", "Tommaso Cesari", "Sébastien Gerchinovitz" ]
[ "Lipschitz optimization", "packing bound", "function-dependent bound", "Piyavskii-Shubert algorithm" ]
NeurIPS 2021 Poster
Accept (Poster)
This paper considers the zeroth-order optimization problem with error certificates for Lipschitz functions. It generalized existing work to higher dimensions and showed that the algorithm is certifiable. The technical contribution is solid. I am happy to recommend acceptance, but I also note that the reviewers have poi...
7
[{"review_id": "m7SRTZwxyiN", "reviewer": "Reviewer_QEpC", "summary": "This paper studied the problem of zeroth-order Lipschitz optimization, and mainly showed that under mild geometric assumptions and a noise-free zeroth-order oracle, the sample complexity of finding the \\varepsilon-optimal point for a function f is ...
Instance-Dependent Bounds for Zeroth-order Lipschitz Optimization with Error Certificates François Bachoc Institut de Mathématiques de Toulouse & Université Paul Sabatier francois.bachoc@math.univ-toulouse.fr Tommaso Cesari Toulouse School of Economics Sébastien Gerchinovitz IRT Saint Exupéry Institut de Mathématiques d...
46,960
ympqhd5gE9
neurips
2,021
main
NeurIPS.cc/2021/Conference
2,029
Single Layer Predictive Normalized Maximum Likelihood for Out-of-Distribution Detection
Detecting out-of-distribution (OOD) samples is vital for developing machine learning based models for critical safety systems. Common approaches for OOD detection assume access to some OOD samples during training which may not be available in a real-life scenario. Instead, we utilize the {\em predictive normalized maxi...
[ "Koby Bibas", "Meir Feder", "Tal Hassner" ]
[ "out-of-distribution", "normalized maximum likelihood", "information theory", "deep neural network", "regret", "generalization error" ]
NeurIPS 2021 Poster
Accept (Poster)
The paper proposes to use predictive normalized maximum likelihood (pNML) for detecting out-of-distribution inputs. Overall, the reviewers found it to be a well-written paper. The idea of pNML for OOD detection is novel and the empirical results show consistent improvements over baseline. The authors did a good job of ad...
5
[{"review_id": "w8CezmNVa6C", "reviewer": "Reviewer_A4Wx", "summary": "The paper analytically derived the pNML regret for the single layer neural network, and apply it to any pre-trained NN by treating the layers as feature extractor and the last layer as a single layer NN for classification. Experimental results show ...
Single Layer Predictive Normalized Maximum Likelihood for Out-of-Distribution Detection Koby Bibas Meir Feder School of Electrical Engineering, Tel Aviv University meir@eng.tau.ac.il kobybibas@gmail.com Tal Hassner Facebook AI talhassner@gmail.com ...
41,214
yqj6q_eNTJd
neurips
2,021
main
NeurIPS.cc/2021/Conference
10,758
ActCooLR – High-Level Learning Rate Schedules using Activation Pattern Temperature
We consider the aspect of learning rate (LR-)scheduling in neural networks, which often significantly affects achievable training time and generalization performance. Although schedules such as 1-cycle offer substantial gains over base-line methods, the effect of LR-curves on the training process is not very well under...
[ "David Hartmann", "Sebastian Brodehl", "Michael Wand" ]
[ "deep learning", "learning rate schedules", "information theory", "activation pattern temperature" ]
NeurIPS 2021 Submitted
Reject
In this paper the authors study the rate of sign flips of the RELU units (or more precisely their inputs) during training, and design a learning rate schedule which tries to achieve a linear decrease in this rate over iterations (which was seen empirically for other well-performing LR schedules). The reviewers found t...
4
[{"review_id": "hSj6Rq2Wlf", "reviewer": "Reviewer_yjMf", "summary": "This papers aims to explore the impact of learning rate schedules in neural network training, through a newly proposed measure denoted the “activation pattern temperature” which examines the probability that ReLU activations will change between the...
ActCooLR – High-Level Learning Rate Schedules using Activation Pattern Temperature Anonymous Author(s) Abstract We consider the aspect of learning rate (LR-)scheduling in neural networks, which often significantly affects achievable training time and generalization performance. Although schedu...
56,879
yn267zYn8Eg
neurips
2,021
main
NeurIPS.cc/2021/Conference
2,601
Analogous to Evolutionary Algorithm: Designing a Unified Sequence Model
Inspired by biological evolution, we explain the rationality of Vision Transformer by analogy with the proven practical Evolutionary Algorithm (EA) and derive that both of them have consistent mathematical representation. Analogous to the dynamic local population in EA, we improve the existing transformer structure and...
[ "Jiangning Zhang", "Chao Xu", "Jian Li", "Wenzhou Chen", "Yabiao Wang", "Ying Tai", "Shuo Chen", "Chengjie Wang", "Feiyue Huang", "Yong Liu" ]
[ "Evolutionary Algorithm", "Vision Transformer", "Spatial-Filling Curve", "ImageNet Classification" ]
NeurIPS 2021 Poster
Accept (Poster)
This paper provides a view on the Transformer architecture as being related to that of an Evolutionary Algorithm. Based on this, the authors propose some improvements to the architecture and show convincing empirical results to demonstrate their efficacy and efficiency. The reviewers and discussion subsequent to the re...
4
[{"review_id": "qdpmwjytagJ", "reviewer": "Reviewer_EQ9b", "summary": "This paper presents a novel perspective to explains the rationality of vision transformer by analogy with a heuristic evolutionary algorithm and proposes an improved EAT model borrowing the dynamic local population concept from EA. Authors introduce...
Analogous to Evolutionary Algorithm: Designing a Unified Sequence Model Jiangning Zhang Chao Xu Jian Li Wenzhou Chen Yabiao Wang Yong Liu Ying Tai Shuo Chen Chengjie Wang Feiyue Huang APRIL Lab, Zhejiang University Youtu Lab, Tencent RIKEN Center for Advanced Intelligence Project ...
51,684
yuCiAWddUFq
neurips
2,021
main
NeurIPS.cc/2021/Conference
11,143
Distribution-free inference for regression: discrete, continuous, and in between
In data analysis problems where we are not able to rely on distributional assumptions, what types of inference guarantees can still be obtained? Many popular methods, such as holdout methods, cross-validation methods, and conformal prediction, are able to provide distribution-free guarantees for predictive inference, b...
[ "Yonghoon Lee", "Rina Barber" ]
[ "distribution-free inference", "nonparametric inference" ]
NeurIPS 2021 Poster
Accept (Poster)
There is a consensus among Reviewers and Area Chair that the problem is interesting and worth pursuing further. There are two issues that make the paper borderline and not quite suitable for publication in its current form: 1. An end-to-end analysis to illustrate Theorem 3 is missing, for instance in a reasonable ...
4
[{"review_id": "hHLwvdTdKIx", "reviewer": "Reviewer_UaZ8", "summary": "This paper extends the lower bound results of Barber [2020] and Gupta et al. [2021] to a larger class of distributions over (X, Y). The earlier results applied for distributions P having P_X (or P_{f(X)} as in Gupta et al.'s result) nonatomic. This ...
Distribution-free inference for regression: discrete, continuous, and in between Yonghoon Lee Department of Statistics, University of Chicago, Chicago, IL 60637 yhoony31@uchicago.edu Rina Foygel Barber Department of Statistics, University of Chicago, Chicago, IL 60637 rina@uchicago.edu Abstract In data ...
37,361
yxsak5ND2pA
neurips
2,021
main
NeurIPS.cc/2021/Conference
4,357
Smooth Normalizing Flows
Normalizing flows are a promising tool for modeling probability distributions in physical systems. While state-of-the-art flows accurately approximate distributions and energies, applications in physics additionally require smooth energies to compute forces and higher-order derivatives. Furthermore, such densities are ...
[ "Jonas Köhler", "Andreas Krämer", "Frank Noe" ]
[ "Normalizing Flows", "Boltzmann Generators", "Smoothness", "Backpropagation through Root-Finding" ]
NeurIPS 2021 Poster
Accept (Poster)
The paper introduces a flow architecture referred to as smooth normalizing flows. These are C^K-smooth maps that work on compact intervals and hypertori. One of the main contributions is to propose a smooth transformation on the unit-interval. The paper is overall well-written and technically sound. Conceptually, the...
4
[{"review_id": "qNvW5ITiN_", "reviewer": "Reviewer_QQCj", "summary": "This paper is proposing smooth normalizing flows for modeling probability distribution in physical systems. This is because the existing flow-based model approximates distributions and energies by computing forces and higher-order derivatives based ...
Smooth Normalizing Flows Jonas Köhler Andreas Krämer Frank Noé Department of Mathematics and Computer Science, Freie Universität Berlin Department of Physics, Freie Universität Berlin Department of Chemistry, Rice University, Houston, TX Abstract Normalizing flows prom...
47,044
ys6L_NWchCp
neurips
2,021
main
NeurIPS.cc/2021/Conference
378
Large-Scale Unsupervised Object Discovery
Existing approaches to unsupervised object discovery (UOD) do not scale up to large datasets without approximations that compromise their performance. We propose a novel formulation of UOD as a ranking problem, amenable to the arsenal of distributed methods available for eigenvalue problems and link analysis. Through t...
[ "Huy V. Vo", "Elena Sizikova", "Cordelia Schmid", "Patrick Perez", "Jean Ponce" ]
[ "large-scale", "unsupervised", "object discovery", "ranking", "distributed", "parallel computation", "graph centrality" ]
NeurIPS 2021 Poster
Accept (Poster)
The formulation of unsupervised object detection as a PageRank problem is neat, and allows to scale up to significantly larger datasets. The paper is accepted, but authors are encouraged to better motivate the use of supervised features (and/or highlight the experiments with self-supervised features more).
4
[{"review_id": "n3QpicMVjl0", "reviewer": "Reviewer_1AuG", "summary": "This is a very interesting paper. It builds upon object discovery formulation by [Vo et al., CVPR’19; ECCV’20], which formulates the discovery as (co-)detection of frequently occurring patterns within an image collection. This approach assumes input...
Large-Scale Unsupervised Object Discovery Huy V. Vo Elena Sizikova Cordelia Schmid Patrick Pérez Jean Ponce INRIA, Département d'informatique de l'ENS, ENS, CNRS, PSL University, Paris, France Center for Data Science, New York University {van-huy.vo, cordelia.schmid, jean.ponce}@inria.fr es5223@nyu....
62,076
yewqeLly5D8
neurips
2,021
main
NeurIPS.cc/2021/Conference
5,519
Topic Modeling Revisited: A Document Graph-based Neural Network Perspective
Most topic modeling approaches are based on the bag-of-words assumption, where each word is required to be conditionally independent in the same document. As a result, both of the generative story and the topic formulation have totally ignored the semantic dependency among words, which is important for improving the se...
[ "Dazhong Shen", "Chuan Qin", "Chao Wang", "Zheng Dong", "Hengshu Zhu", "Hui Xiong" ]
[ "Topic Model", "Neural Variation Inference", "Graph Neural Network" ]
NeurIPS 2021 Poster
Accept (Poster)
Reviews have positive comments about the family of topic models explored here, which uses graph-neural-networks to remove the word independence assumptions in classical models like LDA. Some reviewers raised questions about experimental comparisons to related work, that were reasonably addressed by the author response...
4
[{"review_id": "bV04QEFKTVp", "reviewer": "Reviewer_7TJJ", "summary": "This work is based on neural topic modeling where a document is represented by a graph of word notes with edges representing the semantic dependency and the generative process of learning is a function of both the document-graph structure (i.e., wor...
Topic Modeling Revisited: A Document Graph-based Neural Network Perspective Dazhong Shen, Chuan Qin, Chao Wang, Zheng Dong, Hengshu Zhu, Hui Xiong School of Computer Science and Technology, University of Science and Technology of China Baidu Talent Intelligence Center, Baidu Inc. Artificial Intelligence Thrust, ...
46,272
yrmvFIh_e5o
neurips
2,021
main
NeurIPS.cc/2021/Conference
1,005
$\texttt{LeadCache}$: Regret-Optimal Caching in Networks
We consider an online prediction problem in the context of network caching. Assume that multiple users are connected to several caches via a bipartite network. At any time slot, each user may request an arbitrary file chosen from a large catalog. A user's request at a slot is met if the requested file is cached in at l...
[ "Debjit Paria", "Abhishek Sinha" ]
[ "Online learning", "Network Caching", "Regret Lower Bounds" ]
NeurIPS 2021 Poster
Accept (Poster)
Overall, this is a quality submission. The paper addresses an important problem of interest to the community. It offers new algorithmic and analysis insights into bounding regret for this problem. The reviewers felt the paper was well written and would be of interest.
4
[{"review_id": "ov3dUrxf0a0", "reviewer": "Reviewer_txkX", "summary": "This paper develops online algorithms for a bipartite caching problem. The problem is considered in an adversarial setting and the regret of the proposed algorithm is analyzed. Since the underlying problem is combinatorial, an intermediate step is n...
LeadCache: Regret-Optimal Caching in Networks Debjit Paria Abhishek Sinha Department of Computer Science, Chennai Mathematical Institute, Chennai 603103, India Department of Electrical Engineering, Indian Institute of Technology Madras, Chennai 600036, India Abstract We consider ...
44,055
yaxePRTOhqk
neurips
2,021
main
NeurIPS.cc/2021/Conference
1,297
Stability and Deviation Optimal Risk Bounds with Convergence Rate $O(1/n)$
The sharpest known high probability generalization bounds for uniformly stable algorithms (Feldman, Vondrak, NeurIPS 2018, COLT, 2019), (Bousquet, Klochkov, Zhivotovskiy, COLT, 2020) contain a generally inevitable sampling error term of order $\Theta(1/\sqrt{n})$. When applied to excess risk bounds, this leads to subop...
[ "Yegor Klochkov", "Nikita Zhivotovskiy" ]
[ "algorithmic stability", "generalization bounds", "concentration inequalities", "stochastic convex optimization" ]
NeurIPS 2021 Oral
Accept (Oral)
The reviewers are unanimous that this work provides a significant contribution in the area of uniform stability, which should interest the learning theory community as well as the wider NeurIPS community. A key consequence of the authors' results is the positive resolution of a long-standing open question of Shalev-Shw...
4
[{"review_id": "x7BqHnLrYha", "reviewer": "Reviewer_Bxnk", "summary": "This paper presents high probability bounds of O(1/n) for uniformly stable algorithms. The authors assume the generalized Bernstein condition due to Koltchinskii [26] as an additional assumption compared to related works in the literature. Under thi...
Stability and Deviation Optimal Risk Bounds with Convergence Rate O(1/n) Yegor Klochkov Cambridge-INET, Faculty of Economics, University of Cambridge yk376@cam.ac.uk Nikita Zhivotovskiy Department of Mathematics, ETH Zürich nikita.zhivotovskiy@math.ethz.ch Abstract The sharpest known high probability generalization...
39,506
zO6Q8q2AmbV
neurips
2,021
main
NeurIPS.cc/2021/Conference
7,173
On Component Interactions in Two-Stage Recommender Systems
Thanks to their scalability, two-stage recommenders are used by many of today's largest online platforms, including YouTube, LinkedIn, and Pinterest. These systems produce recommendations in two steps: (i) multiple nominators—tuned for low prediction latency—preselect a small subset of candidates from the whole item po...
[ "Jiri Hron", "Karl Krauth", "Michael Jordan", "Niki Kilbertus" ]
[ "recommender systems", "mixture of experts", "bandits", "scalability" ]
NeurIPS 2021 Poster
Accept (Poster)
Two-stage recommenders are critically important in production recommender systems, yet as the authors and reviewers point out, there has been very little theoretical analysis of this problem. While the reviewers do reasonably question whether bandit-based analysis is really the right theoretical framework, the author ...
4
[{"review_id": "yIRfYAwuK6t", "reviewer": "Reviewer_GChu", "summary": "This paper studies on the properties of two-stage recommendation architecture. Both empirical and theoretical analyses have been conducted to reveal the impact of pool allocation on recommendation performance. The authors further propose a MOE-based...
On component interactions in two-stage recommender systems Jiri Hron University of Cambridge Karl Krauth UC Berkeley Michael I. Jordan UC Berkeley Niki Kilbertus Technical University of Munich Helmholtz AI, Munich Abstract Thanks to their scalability, two-stage recommenders are used by many of today's largest online platfo...
54,920
ycmcCSoNBx8
neurips
2,021
main
NeurIPS.cc/2021/Conference
2,530
Pareto-Optimal Learning-Augmented Algorithms for Online Conversion Problems
This paper leverages machine-learned predictions to design competitive algorithms for online conversion problems with the goal of improving the competitive ratio when predictions are accurate (i.e., consistency), while also guaranteeing a worst-case competitive ratio regardless of the prediction quality (i.e., robustne...
[ "Bo Sun", "Russell Lee", "Mohammad Hajiesmaili", "Adam Wierman", "Danny Tsang" ]
[ "learning-augmented algorithm", "robustness", "consistency", "Pareto-optimality", "online conversion problem" ]
NeurIPS 2021 Poster
Accept (Poster)
This paper provides algorithms augmented machine-learned predictions for online conversion problems. The goal is to improve the competitive ratio when predictions are accurate ("consistency"), while also guaranteeing a worst-case competitive ratio regardless of the prediction quality ("robustness"). The authors show th...
4
[{"review_id": "rImFriTJ5xm", "reviewer": "Reviewer_BJtR", "summary": "This paper provides learning-augmented algorithms for online conversion problems. Suppose you have $1 and want to convert it to a different currency where the exchange rate varies over time. The ideal would be to do the conversion when the exchange ...
Pareto-Optimal Learning-Augmented Algorithms for Online Conversion Problems Bo Sun ECE, HKUST bsun@connect.ust.hk Russell Lee CICS, UMass Amherst rclee@cs.umass.edu Mohammad Hajiesmaili CICS, UMass Amherst hajiesmaili@cs.umass.edu Adam Wierman CMS, Caltech Danny H.K. Tsang ECE, HKUST eetsang@ust.hk adam@caltech...
45,437
yWd42CWN3c
neurips
2,021
main
NeurIPS.cc/2021/Conference
2,109
Combining Recurrent, Convolutional, and Continuous-time Models with Linear State Space Layers
Recurrent neural networks (RNNs), temporal convolutions, and neural differential equations (NDEs) are popular families of deep learning models for time-series data, each with unique strengths and tradeoffs in modeling power and computational efficiency. We introduce a simple sequence model inspired by control systems ...
[ "Albert Gu", "Isys Johnson", "Karan Goel", "Khaled Kamal Saab", "Tri Dao", "Atri Rudra", "Christopher Re" ]
[ "recurrent neural networks", "convolutions", "continuous time models", "sequence models", "state space", "time series", "deep learning architecture" ]
NeurIPS 2021 Poster
Accept (Poster)
The authors propose a continuous time model combining recurrent and convolutional structures. Overall, reviewers are supportive of the paper. The main remaining concerns, after discussion, are mostly with respect to the presentation. It was felt that the paper is dense, heavily relying on the appendix, and could be mor...
4
[{"review_id": "wxVZbmDIaQc", "reviewer": "Reviewer_1sLL", "summary": "This paper proposes a sequence-to-sequence model whose main building block is a linear layer called LSSL based on a standard dynamical system in control theory, including an internal ODE. LSSL is shown to implement both a convolutional and recurrent...
Combining Recurrent, Convolutional, and Continuous-time Models with Linear State-Space Layers Albert Gu, Isys Johnson, Karan Goel, Khaled Saab, Tri Dao, Atri Rudra, Christopher Ré Department of Computer Science, Stanford University Department of Electrical Engineering, Stanford University Department of Computer Scien...
52,221
ykN3tbJ0qmX
neurips
2,021
main
NeurIPS.cc/2021/Conference
8,882
Collapsed Variational Bounds for Bayesian Neural Networks
Recent interest in learning large variational Bayesian Neural Networks (BNNs) has been partly hampered by poor predictive performance caused by underfitting, and their performance is known to be very sensitive to the prior over weights. Current practice often fixes the prior parameters to standard values or tunes them ...
[ "Marcin B. Tomczak", "Siddharth Swaroop", "Andrew Y. K. Foong", "Richard E Turner" ]
[ "Bayesian Neural Networks", "BNNs", "Bayesian Deep Learning", "variational inference", "VI", "underfitting", "collapsed ELBO", "overpruning" ]
NeurIPS 2021 Poster
Accept (Poster)
This paper tightens the standard variational bound optimized in Bayesian Deep Learning by drawing on collapsed variational inference. The authors consider the prior parameters as latent variables and derive a hierarchical variational inference procedure in which the top-level latent variables are marginalized out. The ...
4
[{"review_id": "sr-HRdpKVi", "reviewer": "Reviewer_rzGC", "summary": "Although marginal likelihood can be used to select hyper-parameters, the ELBO is generally regarded as too weak a bound to do this effectively.\nThe authors propose applying collapsed variational bounds to variational inference of BNNs.\nThey show wh...
Collapsed Variational Bounds for Bayesian Neural Networks Marcin Tomczak, Siddharth Swaroop, Andrew Y. Foong, Richard E. Turner University of Cambridge, Cambridge, UK Abstract Recent interest in learning large variational Bayesian Neural Networks (BNNs) has been par...
53,155
yaksQCYcRs
neurips
2,021
main
NeurIPS.cc/2021/Conference
10,989
Neural Program Generation Modulo Static Analysis
State-of-the-art neural models of source code tend to be evaluated on the generation of individual expressions and lines of code, and commonly fail on long-horizon tasks such as the generation of entire method bodies. We propose to address this deficiency using weak supervision from a static program analyzer. Our neuro...
[ "Rohan Mukherjee", "Yeming Wen", "Dipak Chaudhari", "Thomas Reps", "Swarat Chaudhuri", "Chris Jermaine" ]
[ "Program generation", "neurosymbolic learning", "attribute grammars", "program synthesis" ]
NeurIPS 2021 Spotlight
Accept (Spotlight)
The paper is well written and clearly motivated, with good results outperforming the most relevant baselines. All reviewers agree this is a good paper and should be accepted. There are, however, some comments shared by the reviewers that the authors should take into account to improve this work. Most notably the tra...
4
[{"review_id": "kb_la7dzdSF", "reviewer": "Reviewer_spNu", "summary": "Leveraging compiler inferences about code, such as from static analyzers, can improve the ability of neural models to reason about code. This paper introduces a method that relies on attribute grammar coupled with static analysis that operates on pa...
Neural Program Generation Modulo Static Analysis Rohan Mukherjee Rice University Yeming Wen UT Austin Dipak Chaudhari UT Austin Thomas W. Reps University of Wisconsin Swarat Chaudhuri UT Austin Chris Jermaine Rice University Abstract State-of-the-art neural models of source code tend to be evaluated on the generation o...
50,838
ybw2U70q_Vd
neurips
2,021
main
NeurIPS.cc/2021/Conference
1,018
End-to-end Multi-modal Video Temporal Grounding
We address the problem of text-guided video temporal grounding, which aims to identify the time interval of a certain event based on a natural language description. Different from most existing methods that only consider RGB images as visual features, we propose a multi-modal framework to extract complementary informat...
[ "Yi-Wen Chen", "Yi-Hsuan Tsai", "Ming-Hsuan Yang" ]
[ "Computer Vision", "Vision and Language", "Video Temporal Grounding", "Multi-modal Learning", "Transformer", "Contrastive Learning" ]
NeurIPS 2021 Poster
Accept (Poster)
This paper introduces an approach for multi-modal video temporal grounding. The paper received diverging review scores (4, 6, 6, 7), with the positive reviewer championing the paper. All reviewers agree that the approach is somewhat incremental, but it introduces as one of the first the use of depth information. The re...
4
[{"review_id": "WdutDJokR5", "reviewer": "Reviewer_wvV3", "summary": "A method for the video temporal grounding in the multi-modal domain (RGB, flow, depth). To this end, the transformer-based co-attention (+ adaptive fusion) and contrastive learning techniques are developed.\n", "questions": "", "limitations": "", "ra...
End-to-end Multi-modal Video Temporal Grounding Yi-Wen Chen University of California, Merced Yi-Hsuan Tsai Phiar Ming-Hsuan Yang Yonsei University Google Research Abstract We address the problem of text-guided video temporal grounding, which aims to identify the time interval of a certain event based on...
42,958
yRfsADObu18
neurips
2,021
main
NeurIPS.cc/2021/Conference
5,967
Gradient Driven Rewards to Guarantee Fairness in Collaborative Machine Learning
In collaborative machine learning (CML), multiple agents pool their resources (e.g., data) together for a common learning task. In realistic CML settings where the agents are self-interested and not altruistic, they may be unwilling to share data or model information without adequate rewards. Furthermore, as the data/mod...
[ "Xinyi Xu", "Lingjuan Lyu", "Xingjun Ma", "Chenglin Miao", "Chuan-Sheng Foo", "Bryan Kian Hsiang Low" ]
[ "Federated learning", "Shapley value", "Fairness", "Incentives" ]
NeurIPS 2021 Poster
Accept (Poster)
This paper develops an interesting measure for user contribution in collaborative machine learning and an associated compensation mechanism. In general, the reviewers are positive about this paper. The discussion has resolved several issues raised by the reviewers. Please update the paper to clarify the issues in the ...
4
[{"review_id": "qxvK2WO8A6", "reviewer": "Reviewer_P3C7", "summary": "This paper proposed an interesting algorithm that aims for collaborative fairness using gradient information (cosine gradient shapley). Overall the paper is well-written and well-motivated. My main concerns are two folded: (1) The current empirical s...
Gradient-Driven Rewards to Guarantee Fairness in Collaborative Machine Learning Xinyi Xu, Lingjuan Lyu, Xingjun Ma, Chenglin Miao, Chuan-Sheng Foo, and Bryan Kian Hsiang Low Department of Computer Science, National University of Singapore, Republic of Singapore; Sony AI; School of Computer Science, Fudan University...
54,447
yMf3SLah5-y
neurips
2,021
main
NeurIPS.cc/2021/Conference
7,677
Optimal Uniform OPE and Model-based Offline Reinforcement Learning in Time-Homogeneous, Reward-Free and Task-Agnostic Settings
This work studies the statistical limits of uniform convergence for offline policy evaluation (OPE) problems with model-based methods (for episodic MDP) and provides a unified framework towards optimal learning for several well-motivated offline tasks. Uniform OPE $\sup_\Pi|Q^\pi-\hat{Q}^\pi|<\epsilon$ is a stronger me...
[ "Ming Yin", "Yu-Xiang Wang" ]
[ "Theory", "Reinforcement learning theory", "Markov decision process theory" ]
NeurIPS 2021 Poster
Accept (Poster)
The paper studies the uniform UPE problem in the offline RL setting with both upper and lower bound. Most of the reviewers believe the paper is well-written. There is one major concern about the epsilon range. The current results give the bound for eps<1/sqrt{S}. The paper would be much stronger if results about eps>1/...
4
[{"review_id": "J7eBA1FqWJi", "reviewer": "Reviewer_LF7M", "summary": "In this paper the authors provide uniform convergence results for offline policy evaluation, a minimax lower bound for global uniform OPE and an upper bound for local uniform OPE, with a new design analysis tool called singleton absorbing MDP. Besid...
Optimal Uniform OPE and Model-based Offline Reinforcement Learning in Time-Homogeneous, Reward-Free and Task-Agnostic Settings Ming Yin 1,2 and Yu-Xiang Wang 1 Department of Computer Science, UC Santa Barbara; Department of Statistics and Applied Probability, UC Santa Barbara ming-yin@ucsb.edu yuxiangw@cs.ucsb.edu ...
50,740
yq5MYHVaClG
neurips
2,021
main
NeurIPS.cc/2021/Conference
1,607
Emergent Communication of Generalizations
To build agents that can collaborate effectively with others, recent research has trained artificial agents to communicate with each other in Lewis-style referential games. However, this often leads to successful but uninterpretable communication. We argue that this is due to the game objective: communicating about a s...
[ "Jesse Mu", "Noah Goodman" ]
[ "emergent communication", "multi-agent communication", "language grounding", "compositionality" ]
NeurIPS 2021 Poster
Accept (Poster)
This work points out limitations of existing reference games, and proposes using communication for generalization over (concept-level) target sets instead of single targets. Different types of targets, distractors and target sets are explored. The reviewers agree the current work presents several good and useful contri...
4
[{"review_id": "rTc637UZBPn", "reviewer": "Reviewer_13s9", "summary": "The current manuscript proposes to study the emergence of language when agents communicate with each other about generalizations over a set of objects, as part of a referential game. Experiments with object sets yield improved and more interpretable...
Emergent Communication of Generalizations Jesse Mu Noah Goodman Stanford University muj@stanford.edu ngoodman@stanford.edu Abstract To build agents that can collaborate effectively with others, recent research has trained artificial agents to communicate with each other in Lewis-style referentia...
50,248
y_OmkmCH9w
neurips
2,021
main
NeurIPS.cc/2021/Conference
4,995
MST: Masked Self-Supervised Transformer for Visual Representation
Transformer has been widely used for self-supervised pre-training in Natural Language Processing (NLP) and achieved great success. However, it has not been fully explored in visual self-supervised learning. Meanwhile, previous methods only consider the high-level feature and learning representation from a global perspe...
[ "Zhaowen Li", "Zhiyang Chen", "Fan Yang", "Wei Li", "Yousong Zhu", "Chaoyang Zhao", "Rui Deng", "Liwei Wu", "Rui Zhao", "Ming Tang", "Jinqiao Wang" ]
[ "Self-supervised", "Transformer", "Attention-guided mask strategy" ]
NeurIPS 2021 Poster
Accept (Poster)
MST proposes to combine the task of masked language modeling with instance discrimination. While adding non-negligible complexity to training, MST demonstrates gains on the order of 2% on various standard SSL tasks (ImageNet linear eval, MS-COCO, Cityscapes). Additionally, reviewers all agreed that the ablation experimen...
4
[{"review_id": "xegT8CEBjNC", "reviewer": "Reviewer_hFAf", "summary": "The paper proposes to add an image reconstruction task to the existing instance discrimination task, as the pre-text tasks for self-supervised learning. Specifically, the self-attention map from the teacher model is used to guide the random mask pro...
MST: Masked Self-Supervised Transformer for Visual Representation Zhaowen Li, Zhiyang Chen, Fan Yang, Wei Li, Yousong Zhu, Chaoyang Zhao, Rui Deng, Liwei Wu, Rui Zhao, Ming Tang, Jinqiao Wang National Laboratory of Pattern Recognition, Institute of Automation, CAS; School of Artificial Intelligence, University of Chinese...
37,430
ypj3xKoRfmr
neurips
2,021
main
NeurIPS.cc/2021/Conference
8,864
Credal Self-Supervised Learning
Self-training is an effective approach to semi-supervised learning. The key idea is to let the learner itself iteratively generate "pseudo-supervision" for unlabeled instances based on its current hypothesis. In combination with consistency regularization, pseudo-labeling has shown promising performance in various doma...
[ "Julian Lienen", "Eyke Hüllermeier" ]
[ "semi-supervised learning", "self-training", "superset learning", "credal sets", "label relaxation", "pseudo-labeling" ]
NeurIPS 2021 Poster
Accept (Poster)
After the discussion, the reviewers agree that the paper proposes an interesting approach of using credal sets as a generalization of using probability estimates in self-supervision approaches. The resulting loss function is relatively simple and the authors describe one practical way to implement it. The paper is st...
4
[{"review_id": "wgGTi3nqk_M", "reviewer": "Reviewer_TVWJ", "summary": "For the task of semi-supervised learning, this work proposes the use of credal sets to model uncertainty in pseudo-labels. By proposing simple scheme for generating and learning from these credal sets, the authors show how the FixMatch baseline can ...
Credal Self-Supervised Learning Julian Lienen Department of Computer Science Paderborn University Paderborn 33098, Germany Eyke Hüllermeier Institute of Informatics University of Munich (LMU) Munich 80538, Germany eyke@ifi.lmu.de Abstract Self-training is an effective approach to semi-supervi...
51,007
yTXtUSV-gk4
neurips
2,021
main
NeurIPS.cc/2021/Conference
8,021
Ising Model Selection Using $\ell_{1}$-Regularized Linear Regression: A Statistical Mechanics Analysis
We theoretically analyze the typical learning performance of $\ell_{1}$-regularized linear regression ($\ell_1$-LinR) for Ising model selection using the replica method from statistical mechanics. For typical random regular graphs in the paramagnetic phase, an accurate estimate of the typical sample complexity of $\ell...
[ "Xiangming Meng", "Tomoyuki Obuchi", "Yoshiyuki Kabashima" ]
[ "Ising model", "Lasso", "replica method", "statistical physics" ]
NeurIPS 2021 Poster
Accept (Poster)
The authors in this paper study the learning of a Boltzmann machine, aka the Inverse Ising model, a classical problem in graphical models and Markov Random Fields. They propose an analysis of the performance of the l1-regularised linear regression estimator in finding the underlying non-zero coefficients. The analysis ...
6
[{"review_id": "v9NfNWCAq1", "reviewer": "Reviewer_Y3wG", "summary": "The authors study the Ising model selection problem and compute analytically via the replica method the typical performances of the $\\ell_1$-regularized linear regression. They confirm the theoretical result by running a good number of numerical sim...
Ising Model Selection Using ℓ1-Regularized Linear Regression: A Statistical Mechanics Analysis Xiangming Meng (Institute for Physics of Intelligence, The University of Tokyo) Tomoyuki Obuchi (Department of Systems Science, Kyoto University, Kyoto 606-8501, Japan, obuchi@i.kyoto-u.ac.jp) 7-3-1, Hongo, Tokyo 113-0033, J...
46,252
yLyXqdsYho
neurips
2,021
main
NeurIPS.cc/2021/Conference
1,965
On Margin-Based Cluster Recovery with Oracle Queries
We study an active cluster recovery problem where, given a set of $n$ points and an oracle answering queries like ``are these two points in the same cluster?'', the task is to recover exactly all clusters using as few queries as possible. We begin by introducing a simple but general notion of margin between clusters th...
[ "Marco Bressan", "Nicolò Cesa-Bianchi", "Silvio Lattanzi", "Andrea Paudice" ]
[ "theory of clustering", "active learning", "clustering in metric spaces", "convex hulls", "clustering stability", "same-cluster queries" ]
NeurIPS 2021 Poster
Accept (Poster)
The paper discusses a clustering setting in which clusters are recovered using queries of the form “are two points x, x’ in the same cluster”, in a general notion of margin which generalizes previously studied cases. The paper is theoretical, with no experimental part. Reviews are high quality, with consistent pro-ac...
3
[{"review_id": "yjCh5d1bvEf", "reviewer": "Reviewer_BX75", "summary": "This paper considers the problem of finding convex clusters in space using a small number of cluster queries. It gives an algorithm for this problem, and also extends the algorithm to the one-vs-all margin. It further shows a connection to exact act...
On Margin-Based Cluster Recovery with Oracle Queries Marco Bressan Dept. of CS, Univ. of Milan, Italy marco.bressan@unimi.it Nicolò Cesa-Bianchi DSRC & Dept. of CS, Univ. of Milan, Italy nicolo.cesa-bianchi@unimi.it Silvio Lattanzi Google silviol@google.com Andrea Paudice Dept. of CS, Univ. of Milan, Italy & Istituto I...
49,527
ye-NP0VZtLC
neurips
2,021
main
NeurIPS.cc/2021/Conference
4,093
Minimizing Polarization and Disagreement in Social Networks via Link Recommendation
Individual's opinions are fundamentally shaped and evolved by their interactions with other people, and social phenomena such as disagreement and polarization are now tightly woven into daily life. The quantification and optimization of these concepts have been the subject of much recent research behind a wealth of hi...
[ "Liwang Zhu", "Qi Bao", "Zhongzhi Zhang" ]
[ "Opinion dynamics", "social network", "graph algorithm", "discrete optimization" ]
NeurIPS 2021 Poster
Accept (Poster)
Generally all reviews for this paper were positive -- they appreciated the algorithmic contribution -- showing that a greedy approach to link recommendation for minimizing polarization+disagreement gives a bounded approximation ratio, despite the fact that the polarization+disagreement is not submodular. There were sig...
4
[{"review_id": "lbKxj0dctz", "reviewer": "Reviewer_HjPc", "summary": "In this paper, the authors study an opinion dynamics problem using the Friedkin-Johnsen model, where one aims to minimize the polarization + disagreement by adding up to k edges. The authors derive a greedy algorithm for this problem and show that it...
Minimizing Polarization and Disagreement in Social Networks via Link Recommendation Liwang Zhu, Qi Bao, and Zhongzhi Zhang Shanghai Key Lab of Intelligent Information Processing, Fudan University, Shanghai, China School of Computer Science, Fudan University, Shanghai 200433, China {19210240147, 20110240002, zhangzz}@...
43,564
yKdYdQbo22W
neurips
2,021
main
NeurIPS.cc/2021/Conference
9,648
Provably Strict Generalisation Benefit for Invariance in Kernel Methods
It is a commonly held belief that enforcing invariance improves generalisation. Although this approach enjoys widespread popularity, it is only very recently that a rigorous theoretical demonstration of this benefit has been established. In this work we build on the function space perspective of Elesedy and Zaidi [8] t...
[ "Bryn Elesedy" ]
[ "generalization", "kernel methods", "invariance", "equivariance", "symmetry", "geometric deep learning", "statistical learning theory" ]
NeurIPS 2021 Poster
Accept (Poster)
The paper provides theoretical work on the invariance of positive definite kernels and RKHS. The obtained results are novel and of high theoretical significance. On the other hand, its practical meaning is not very clear in this paper. Overall, the theoretical development on the invariance of kernel/RKHS is signifi...
4
[{"review_id": "x5qbPH25vIU", "reviewer": "Reviewer_uJHC", "summary": "The authors present a theoretical generalization gap based on the invariance incorporated by the kernel. An orbit-averaged functional is defined with a set of invariant transformations G on data to produce an operator which maps a function to anothe...
Provably Strict Generalisation Benefit for Invariance in Kernel Methods Bryn Elesedy University of Oxford bryn@robots.ox.ac.uk Abstract It is a commonly held belief that enforcing invariance improves generalisation. Although this approach enjoys widespread popularity, it is only very recently that a rigorous theoretical d...
37,738
ySFGlFjgIfN
neurips
2,021
main
NeurIPS.cc/2021/Conference
7,537
Towards Robust Bisimulation Metric Learning
Learned representations in deep reinforcement learning (DRL) have to extract task-relevant information from complex observations, balancing between robustness to distraction and informativeness to the policy. Such stable and rich representations, often learned via modern function approximation techniques, can enable pr...
[ "Mete Kemertas", "Tristan Ty Aumentado-Armstrong" ]
[ "reinforcement learning", "bisimulation", "state abstraction", "sparse rewards", "state similarity metrics", "representation learning", "continuous control" ]
NeurIPS 2021 Poster
Accept (Poster)
Three knowledgable reviewers recommended acceptance of the paper (2x accept, 1x weak accept) and one reviewer recommended (weak) rejection of the paper. The authors addressed most of the reviewers' concerns in their rebuttal but some concerns were not resolved. In the discussion about the paper we came to the conclusio...
4
[{"review_id": "mzhYCQxRL-C", "reviewer": "Reviewer_mSYo", "summary": "This paper considers the problem of learning good representations for reinforcement learning problems. It approaches this from the perspective of bisimulation metrics, extending earlier work on on-policy bisimulation for control (DBC). The paper ide...
Towards Robust Bisimulation Metric Learning Mete Kemertas Department of Computer Science University of Toronto kemertas@cs.toronto.edu Tristan Aumentado-Armstrong Department of Computer Science University of Toronto taumen@cs.toronto.edu Abstract Learned representations in deep reinforcement learning (DRL) have to ext...
50,384
yRTebElmilN
neurips
2,021
main
NeurIPS.cc/2021/Conference
10,304
Shared Independent Component Analysis for Multi-Subject Neuroimaging
We consider shared response modeling, a multi-view learning problem where one wants to identify common components from multiple datasets or views. We introduce Shared Independent Component Analysis (ShICA) that models each view as a linear transform of shared independent components contaminated by additive Gaussian noi...
[ "Hugo Richard", "Pierre Ablin", "Bertrand Thirion", "Alexandre Gramfort", "Aapo Hyvarinen" ]
[ "neuroimaging", "fMRI", "MEG", "shared response modeling", "component analysis", "independent component analysis", "multi-view learning" ]
NeurIPS 2021 Poster
Accept (Poster)
Two reviewers recommend rejection and two reviewers recommend acceptance. After reading the reviews, the rebuttal, the internal discussion among reviewers and after my own reading of the paper, I believe this work provides a genuine contribution to ICA for multiple views and therefore I recommend Accept. The method pro...
4
[{"review_id": "yFGU1vSDGAR", "reviewer": "Reviewer_v4RY", "summary": "The authors propose the shared independent component analysis which can account for shared components $\\mathbf{s}$ and individual differences through the noise signal $\\mathbf{n}$. Notably the noise can vary across subjects and views in magnitude ...
Shared Independent Component Analysis for Multi-Subject Neuroimaging Hugo Richard Inria, Université Paris-Saclay, Palaiseau, France Pierre Ablin DMA, CNRS and ENS, Paris, France Bertrand Thirion Inria, Université Paris-Saclay, Palaiseau, France Alexandre Gramfort Inria, Université Paris-Saclay, Palaiseau, France Aapo Hyvärinen Depar...
38,186
yTJtgA1Gh2
neurips
2,021
main
NeurIPS.cc/2021/Conference
5,237
Even your Teacher Needs Guidance: Ground-Truth Targets Dampen Regularization Imposed by Self-Distillation
Knowledge distillation is classically a procedure where a neural network is trained on the output of another network along with the original targets in order to transfer knowledge between the architectures. The special case of self-distillation, where the network architectures are identical, has been observed to improv...
[ "Kenneth Borup", "Lars Nørvang Andersen" ]
[ "Knowledge Distillation", "Self Distillation", "Kernel Ridge Regression", "Statistical Learning", "Theory" ]
NeurIPS 2021 Poster
Accept (Poster)
This paper provides an in-depth analysis of self-distillation in kernel regression setting, and studies the effect of using a weighted combination of ground truth labels and predictions made by the model, to define the next set of target values. It provides a closed form solution for the optimal choice of weighting par...
3
[{"review_id": "oK_cTA5MmSs", "reviewer": "Reviewer_wK1m", "summary": "This paper proposed a variant of self distillation which incorporates both the model output and the ground truth targets. The author provided a detailed theoretical analysis about how fixed weighted ground truth restrain the regularization amplified...
Even your Teacher Needs Guidance: Ground-Truth Targets Dampen Regularization Imposed by Self-Distillation Kenneth Borup Department of Mathematics Aarhus University kennethborup@math.au.dk Lars N. Andersen Department of Mathematics Aarhus University larsa@math.au.dk Abstract Knowledge distillation is classically a proced...
39,704
yUNQBMsLGA
neurips
2,021
main
NeurIPS.cc/2021/Conference
5,202
BCORLE($\lambda$): An Offline Reinforcement Learning and Evaluation Framework for Coupons Allocation in E-commerce Market
Coupons allocation is an important tool for enterprises to increase the activity and loyalty of users on the e-commerce market. One related fundamental problem is how to allocate coupons within a fixed budget while maximizing users' retention on the e-commerce platform. The online e-commerce environment is complicated ...
[ "Yang Zhang", "Bo Tang", "Qingyu Yang", "Dou An", "Hongyin Tang", "Chenyang Xi", "Xueying LI", "Feiyu Xiong" ]
[ "Application of E-commerce Market;Coupons Allocation", "Constrained Markov Decision Process", "Offline Reinforcement Learning", "Off-policy Evaluation" ]
NeurIPS 2021 Poster
Accept (Poster)
This paper received mixed reviews from the reviewers. Its strength is that it seems successful in proposing a budget constrained offline reinforcement learning and evaluation framework for the coupon allocation problem on e-commerce platforms. The drawbacks of the paper are the limited novelty of the approach (it appea...
4
[{"review_id": "yZ5dJ2LbIC_", "reviewer": "Reviewer_gJ2K", "summary": "The paper presents a new reinforcement learning-based solution to the problem of optimal coupons allocation. The authors build upon an existing method that relaxes the constrained RL problem using lagrangian multipliers but that requires repeated fu...
BCORLE(λ): An Offline Reinforcement Learning and Evaluation Framework for Coupons Allocation in E-commerce Market Yang Zhang* Alibaba Group Bo Tang Faculty of Electronics and Information, Xi'an Jiaotong University Qingyu Yang Faculty of Electronics and Information, Xi'an Jiaotong University Dou An Faculty of Electronics...
47,928
yNzF41lHYV
neurips
2,021
main
NeurIPS.cc/2021/Conference
444
Believe What You See: Implicit Constraint Approach for Offline Multi-Agent Reinforcement Learning
Learning from datasets without interaction with environments (Offline Learning) is an essential step to apply Reinforcement Learning (RL) algorithms in real-world scenarios. However, compared with the single-agent counterpart, offline multi-agent RL introduces more agents with the larger state and action space, which i...
[ "Yiqin Yang", "Xiaoteng Ma", "Chenghao Li", "Zewu Zheng", "Qiyuan Zhang", "Gao Huang", "Jun Yang", "Qianchuan Zhao" ]
[ "Extrapolation Error", "Offline Reinforcement Learning", "Multi-Agent Reinforcement Learning" ]
NeurIPS 2021 Spotlight
Accept (Spotlight)
The paper addresses the problem of offline multi-agent reinforcement learning (MARL) and identifies a key challenge in that the accumulated extrapolation error increases with the number of agents. The authors propose a principled approach to address this problem, based on the idea of constraining Q-value estimations to...
4
[{"review_id": "u7Sen8QrJ6b", "reviewer": "Reviewer_P8yc", "summary": "Summary:\nExtrapolation error is a vital issue in offline reinforcement learning, and this paper addresses its impact and solution on both the single-agent and multi-agent scenario. The paper analyzed the root cause of extrapolation error with Batc...
Believe What You See: Implicit Constraint Approach for Offline Multi-Agent Reinforcement Learning Yiqin Yang, Xiaoteng Ma, Chenghao Li, Zewu Zheng, Qiyuan Zhang, Gao Huang, Jun Yang, Qianchuan Zhao Tsinghua University; Harbin Institute of Technology {yangyiqi19, ma-xt17, lich18}@mails.tsinghua.edu.cn zhang...
45,674
yILzFBjR0Y
neurips
2,021
main
NeurIPS.cc/2021/Conference
1,605
GraphFormers: GNN-nested Transformers for Representation Learning on Textual Graph
The representation learning on textual graph is to generate low-dimensional embeddings for the nodes based on the individual textual features and the neighbourhood information. Recent breakthroughs on pretrained language models and graph neural networks push forward the development of corresponding techniques. The exis...
[ "Junhan Yang", "Zheng Liu", "Shitao Xiao", "Chaozhuo Li", "Defu Lian", "Sanjay Agrawal", "Amit S", "Guangzhong Sun", "Xing Xie" ]
[ "representation learning on textual graph", "text mining", "graph mining", "relevance modeling" ]
NeurIPS 2021 Poster
Accept (Poster)
This paper proposes a new model for learning representations on a textual graph. The key idea is to combine Transformer and graph neural network. Experimental results show that the proposed approach works better than the traditional cascade approach. Strength * The paper is generally clearly written, although there is...
4
[{"review_id": "WmgEa5y7DTD", "reviewer": "Reviewer_1iRC", "summary": "This paper proposes a means to encode text feature for nodes in textual graphs, where a layerwise aggregation module (implemented as a multi-head self-attention) is appended after each Transformer encoder block that aggregates the hidden representat...
GraphFormers: GNN-nested Transformers for Representation Learning on Textual Graph Junhan Yang, Zheng Liu, Shitao Xiao, Chaozhuo Li, Defu Lian, Sanjay Agrawal, Amit Singh, Guangzhong Sun, Xing Xie University of Science and Technology of China, Hefei, China; Microsoft Research Asia, Beijing, China; Beijing University...
45,608
yGKklt8wyV
neurips
2,021
main
NeurIPS.cc/2021/Conference
9,526
Graph Neural Networks with Local Graph Parameters
Various recent proposals increase the distinguishing power of Graph Neural Networks (GNNs) by propagating features between k-tuples of vertices. The distinguishing power of these “higher-order” GNNs is known to be bounded by the k-dimensional Weisfeiler-Leman (WL) test, yet their O(n^k) memory requirements limit their ...
[ "Pablo Barcelo", "Floris Geerts", "Juan L Reutter", "Maksimilian Ryschkov" ]
[ "graph neural network", "GNN", "finite model theory", "homomorphism counts" ]
NeurIPS 2021 Poster
Accept (Poster)
This paper unifies several classes of recently proposed graph neural network architectures via the notion of graph homomorphisms. The reviewers agree that presented theoretical framework is novel. Even more importantly, it can be successfully applied to propose new GNN architectures. The paper is well written and provi...
4
[{"review_id": "qSX4jGaJXy_", "reviewer": "Reviewer_vuNp", "summary": "- This paper proposes F-MPNN, which extends the expressive power of MPNN whilst preserving their O(n) cost in each iteration.\n- This paper provides some theoretical guarantees for F-MPNNs. \n-- It shows that F-MPNN can be at most as expressive as F...
Graph Neural Networks with Local Graph Parameters Pablo Barceló and Juan Reutter, Department of Computer Science, PUC, Chile & Millennium Institute for Foundational Research on Data, Chile; Floris Geerts and Maksimilian Ryschkov, Department of Computer Science, University of Antwerp, Belgium {pbarcelo, jreutter}@ing.puc.cl ...
57,090
yKoZfSVFtAx
neurips
2,021
main
NeurIPS.cc/2021/Conference
10,122
Efficient and Local Parallel Random Walks
Random walks are a fundamental primitive used in many machine learning algorithms with several applications in clustering and semi-supervised learning. Despite their relevance, the first efficient parallel algorithm to compute random walks has been introduced very recently (Łącki et al.). Unfortunately their method has...
[ "Michael Kapralov", "Silvio Lattanzi", "Navid Nouri", "Jakab Tardos" ]
[ "personalized pagerank", "random walks", "MPC", "clustering" ]
NeurIPS 2021 Poster
Accept (Poster)
The reviewers uniformly liked the algorithmic contribution of the paper. However, there were serious concerns about the writing quality of the paper. The reviewers strongly encourage the authors to polish the paper. Overall, the quality of the contributions were deemed to be of significant interest to warrant an accept...
4
[{"review_id": "q6GhxoDhKv8", "reviewer": "Reviewer_vgtt", "summary": "The paper introduces an algorithm for computing multiple random walks in parallel. Performance is measured using the MPC model. The authors also show an interesting application to local graph clustering. The proposed algorithm improves upon memory r...
Efficient and Local Parallel Random Walks Michael Kapralov EPFL michael.kapralov@epfl.ch Silvio Lattanzi Google Research silviol@google.com Navid Nouri EPFL navid.nouri@epfl.ch Jakab Tardos EPFL jakab.tardos@epfl.ch Abstract Random walks are a fundamental primitive used in many machine learning algorithms...
41,996
yDwfVD_odRo
neurips
2,021
main
NeurIPS.cc/2021/Conference
2,905
A 3D Generative Model for Structure-Based Drug Design
We study a fundamental problem in structure-based drug design --- generating molecules that bind to specific protein binding sites. While we have witnessed the great success of deep generative models in drug design, the existing methods are mostly string-based or graph-based. They are limited by the lack of spatial inf...
[ "Shitong Luo", "Jiaqi Guan", "Jianzhu Ma", "Jian Peng" ]
[ "molecule generation", "3d generative models", "drug design" ]
NeurIPS 2021 Poster
Accept (Poster)
The AC and reviewers all agree that this is an interesting submission. We strongly urge the authors to incorporate their clarifying comments into the manuscript. In addition, as mentioned by several reviewers, it is important that the authors be clear about the relative value of Vina score. As noted by kPk1, coming u...
4
[{"review_id": "vu4qqTQFgpz", "reviewer": "Reviewer_GUvn", "summary": "The paper proposed a deep generative model to design novel molecules in 3D space that bind to specific targets. Concretely, given the binding site as context, the generative model estimates the probability density of atoms’ occurrences in 3D space. ...
A 3D Generative Model for Structure-Based Drug Design Shitong Luo HeliXon Research luost@helixon.com luost26@gmail.com Jiaqi Guan University of Illinois Urbana-Champaign jiaqi@illinois.edu Jianzhu Ma Peking University majianzhu@pku.edu.cn Jian Peng University of Illinois Urbana-Champaign jian...
37,409
y8y6GJUL01H
neurips
2,021
main
NeurIPS.cc/2021/Conference
5,406
No-regret Online Learning over Riemannian Manifolds
We consider online optimization over Riemannian manifolds, where a learner attempts to minimize a sequence of time-varying loss functions defined on Riemannian manifolds. Though many Euclidean online convex optimization algorithms have been proven useful in a wide range of areas, less attention has been paid to their R...
[ "Xi Wang", "Zhipeng Tu", "Yiguang Hong", "Yingyi Wu", "Guodong Shi" ]
[ "Online optimization", "Riemannian manifolds", "Riemannian optimization" ]
NeurIPS 2021 Poster
Accept (Poster)
This paper considers online optimization problems in Hadamard manifolds - i.e., simply connected, complete Riemannian manifolds with everywhere non-positive sectional curvature. The authors consider several different settings - full-gradient versus gradient-free algorithms against both convex or strongly-convex objecti...
4
[{"review_id": "w4srLQuwxro", "reviewer": "Reviewer_6h8i", "summary": "This paper takes online learning algorithms designed for the Euclidean space (algorithms for online convex optimization with convex or strongly convex functions in the full information case. Or the convex case with bandit feedback) and generalizes t...
No-regret Online Learning over Riemannian Manifolds Xi Wang AMSS, Chinese Academy of Sciences, Beijing, China wangxi14@mails.ucas.ac.cn Zhipeng Tu AMSS, Chinese Academy of Sciences, Beijing, China tuzhipeng@amss.ac.cn Yiguang Hong Tongji University, Shanghai, China yghong@iss.ac.cn Yingyi Wu Guodong Shi University of Chinese...
38,689
yehlf2AvSD_
neurips
2,021
main
NeurIPS.cc/2021/Conference
4,350
Mean-Square Analysis with An Application to Optimal Dimension Dependence of Langevin Monte Carlo
Sampling algorithms based on discretizations of Stochastic Differential Equations (SDEs) compose a rich and popular subset of MCMC methods. This work provides a general framework for the non-asymptotic analysis of sampling error in 2-Wasserstein distance, which also leads to a bound of mixing time. The method applies t...
[ "Ruilin Li", "Hongyuan Zha", "Molei Tao" ]
[ "mean-square analysis", "SDE-based sampling algorithm", "Langevin Monte Carlo", "dimension dependence", "non-asymptotic error analysis" ]
NeurIPS 2021 Submitted
Reject
This is a good paper, but unfortunately the authors missed an important reference [Thm 1, Li et al., 2019], and proved a result that is almost identical to the one in that paper. Comparing [Thm 1, Li et al., 2019] and Thms 3.3, 3.4 and Cor 3.5 in the current paper, one notices that the results are almost identical, wit...
4
[{"review_id": "tsOugZ5MKwb", "reviewer": "Reviewer_j2p5", "summary": "This paper introduces a framework for analyzing the discretization of SDEs such as the Langevin diffusion. Within this framework, once the discretization error incurred by the algorithm in a single iteration is controlled, it automatically yields a ...
Mean-Square Analysis with An Application to Optimal Dimension Dependence of Langevin Monte Carlo. Anonymous Author(s). Abstract: Sampling algorithms based on discretizations of Stochastic Differential Equations (SDEs) compose a rich and popular subset of MCMC methods. This work provides a gener...
38,945
yLEcG62ANX
neurips
2,021
main
NeurIPS.cc/2021/Conference
6,609
Directed Graph Contrastive Learning
Graph Contrastive Learning (GCL) has emerged to learn generalizable representations from contrastive views. However, it is still in its infancy with two concerns: 1) changing the graph structure through data augmentation to generate contrastive views may mislead the message passing scheme, as such graph changing action...
[ "Zekun Tong", "Yuxuan Liang", "Henghui Ding", "Yongxing Dai", "Xinke Li", "Changhu Wang" ]
[ "Contrastive Learning", "Directed Graph", "Graph Neural Network", "Curriculum Learning" ]
NeurIPS 2021 Poster
Accept (Poster)
This paper proposes for digraphs a contrastive learning pipeline, which consists of Laplacian permutation and a multi-view digraph contrastive learning framework that leverages multi-task curriculum learning. The proposed Laplacian permutation is a data augmentation method that is proved theoretically to provide contra...
4
[{"review_id": "tFQOsGbqL7t", "reviewer": "Reviewer_bT23", "summary": "This paper designs a contrastive learning framework for directed graphs (Digraph). It tackles two problems in the domain: \n- previous method to generate contrastive views (e.g., dropping edge/node) can mislead the graph topology. \n- there is a lim...
Directed Graph Contrastive Learning. Zekun Tong, Yuxuan Liang, Henghui Ding, Yongxing Dai, Xinke Li, Changhu Wang. National University of Singapore; ByteDance; ETH Zürich; Peking University. {zekuntong, liangyuxuan, xinke.li}@u.nus.edu, henghui.ding@vision.ee.ethz.ch, changhu.wang@byt...
53,948
yCA2i3bGbfC
neurips
2,021
main
NeurIPS.cc/2021/Conference
10,730
Identification of Partially Observed Linear Causal Models: Graphical Conditions for the Non-Gaussian and Heterogeneous Cases
In causal discovery, linear non-Gaussian acyclic models (LiNGAMs) have been studied extensively. While the causally sufficient case is well understood, in many real problems the observed variables are not causally related. Rather, they are generated by latent variables, such as confounders and mediators, which may them...
[ "Jeffrey Adams", "Niels Richard Hansen", "Kun Zhang" ]
[ "Causal Discovery", "Structural Equation Models", "Latent Variable Modeling" ]
NeurIPS 2021 Poster
Accept (Poster)
The reviewers appreciated the relevance of the paper to the NeurIPS community and the strength of the technical results. Additionally, the authors did a good job of engaging with the reviewers to dispel most concerns. Hopefully, the discussion clarified to the authors which parts of the paper are less readable, and t...
4
[{"review_id": "duy4ia4JVkG", "reviewer": "Reviewer_jP5B", "summary": "The paper provides necessary and sufficient graphical conditions for full identification of linear causal models with latent variables. It introduces the so-called ‘bottleneck’ and ‘strong-redundancy’ condition in combination with bottleneck faithfu...
Identification of Partially Observed Linear Causal Models: Graphical Conditions for the Non-Gaussian and Heterogeneous Cases. Jeffrey Adams, Niels Richard Hansen (Department of Mathematical Sciences, University of Copenhagen, Denmark), Kun Zhang (Department of Philosophy, Carnegie Mellon University, Pittsburgh, USA). ja@math...
41,348
yJqcM36Qvnu
neurips
2,021
main
NeurIPS.cc/2021/Conference
10,317
Federated Graph Classification over Non-IID Graphs
Federated learning has emerged as an important paradigm for training machine learning models in different domains. For graph-level tasks such as graph classification, graphs can also be regarded as a special type of data samples, which can be collected and stored in separate local systems. Similar to other domains, mul...
[ "Han Xie", "Jing Ma", "Li Xiong", "Carl Yang" ]
[ "federated learning", "graph classification", "non-iid graphs", "structure heterogeneity", "feature heterogeneity" ]
NeurIPS 2021 Poster
Accept (Poster)
There was a fruitful discussion about this paper. Overall, I feel that the paper contains enough interesting and novel ideas for a publication.
3
[{"review_id": "zbL4mRXhJBH", "reviewer": "Reviewer_RhpX", "summary": "This paper advocates a novel setting of cross-dataset and cross-domain federated learning for graph classification, which allows multiple data owners with graphs of non-iid structures and features to collaboratively train powerful graph classifiers ...
Federated Graph Classification over Non-IID Graphs. Han Xie, Jing Ma, Li Xiong, Carl Yang. Department of Computer Science, Emory University. {han.xie, jing.ma, lxiong, j.carlyang}@emory.edu. Abstract: Federated learning has emerged as an important paradigm for training machine learning models in different domains. For gra...
52,529
xmMHxfE1qS6
neurips
2,021
main
NeurIPS.cc/2021/Conference
1,271
Adversarially Robust Change Point Detection
Change point detection is becoming increasingly popular in many application areas. On one hand, most of the theoretically-justified methods are investigated in an ideal setting without model violations, or merely robust against identical heavy-tailed noise distribution across time and/or against isolate outliers; on th...
[ "Mengchu Li", "Yi Yu" ]
[ "change point detection", "robust statistics" ]
NeurIPS 2021 Poster
Accept (Poster)
The paper gives an adversarially robust way to perform change point detection in univariate time series analysis. Adversarial robustness in this context has not been considered before, and this paper provides a new algorithm as well as empirical evidence.
3
[{"review_id": "uzSLrtaYNYy", "reviewer": "Reviewer_J2f5", "summary": "The paper addresses the problem of change point detection for univariate time series in the presence of adversarial attack through the introduction of data perturbations. A novel method is introduced for this problem based on a robust variant of the...
Adversarially Robust Change Point Detection. Mengchu Li (Department of Statistics, University of Warwick, mengchu.li@warwick.ac.uk), Yi Yu (Department of Statistics, University of Warwick, yi.yu.2@warwick.ac.uk). Abstract: Change point detection is becoming increasingly popular in many application ar...
43,948
y2p9IIXwdg2
neurips
2,021
main
NeurIPS.cc/2021/Conference
3,723
Deconvolutional Networks on Graph Data
In this paper, we consider an inverse problem in graph learning domain -- "given the graph representations smoothed by Graph Convolutional Network (GCN), how can we reconstruct the input graph signal?" We propose Graph Deconvolutional Network (GDN) and motivate the design of GDN via a combination of inverse filters in ...
[ "Jia Li", "Jiajin Li", "Yang Liu", "Jianwei Yu", "Yueting Li", "Hong Cheng" ]
[ "deconvolutional networks", "inverse problem", "graph generation" ]
NeurIPS 2021 Poster
Accept (Poster)
The paper proposes a (niche) topic that is still not well explored, i.e. the definition of an efficient and effective deconvolutional network for graph data. This is done by mainly exploiting already existing results, however introducing a significant level of novelty. During the discussion period the authors were able...
4
[{"review_id": "XNmpMKqIBE5", "reviewer": "Reviewer_Vnte", "summary": "This paper derives an inverse operator, which is said to be better than other methods in ref [50] and GALA. The work also points out the inverse operation results in a high pass filter and may amplify the noise. Motivated by this observation, a de-n...
Deconvolutional Networks on Graph Data. Jia Li, Jiajin Li, Yang Liu, Jianwei Yu, Yueting Li, Hong Cheng. Hong Kong University of Science and Technology; The Chinese University of Hong Kong. Abstract: In this paper, we consider an inverse problem in graph learning domain -- "given the graph representations smooth...
36,186
yITJ6t31eAE
neurips
2,021
main
NeurIPS.cc/2021/Conference
6,966
Lattice partition recovery with dyadic CART
We study piece-wise constant signals corrupted by additive Gaussian noise over a $d$-dimensional lattice. Data of this form naturally arise in a host of applications, and the tasks of signal detection or testing, de-noising and estimation have been studied extensively in the statistical and signal processing literatur...
[ "OSCAR HERNAN MADRID PADILLA", "Yi Yu", "Alessandro Rinaldo" ]
[ "Classification and Regression Trees (CART)", "Recursive Dyadic Partitions", "Piecewise Constant Signals", "Partition Recovery" ]
NeurIPS 2021 Poster
Accept (Poster)
This paper studies the problem of recovering the partition of a piecewise constant signal supported on a multi-dimensional lattice. Specifically, the true signal is assumed to be piecewise constant over an unknown rectangular partition of the lattice. Given access to noisy measurements, the goal is to recover the under...
3
[{"review_id": "aeNtCPKp8Zl", "reviewer": "Reviewer_SRUt", "summary": "The authors consider the problem of recovering the partition underlying a piecewise constant function over a $d$-dimensional lattice, _i.e._, a signal with support $L_{d,n} = \\lbrace 1, \\dots, n\\rbrace^d$. That is, we wish to recover the structur...
Lattice partition recovery with dyadic CART. Oscar Hernan Madrid Padilla (Department of Statistics, University of California, Los Angeles), Yi Yu (Department of Statistics, University of Warwick), Alessandro Rinaldo (Department of Statistics & Data S...
41,610
y7l4h5xtaqQ
neurips
2,021
main
NeurIPS.cc/2021/Conference
1,213
A variational approximate posterior for the deep Wishart process
Recent work introduced deep kernel processes as an entirely kernel-based alternative to NNs (Aitchison et al. 2020). Deep kernel processes flexibly learn good top-layer representations by alternately sampling the kernel from a distribution over positive semi-definite matrices and performing nonlinear transformations. A...
[ "Sebastian W. Ober", "Laurence Aitchison" ]
[ "deep Gaussian process", "Bayesian inference", "variational inference" ]
NeurIPS 2021 Poster
Accept (Poster)
The authors propose a variational approximation scheme for Deep Kernel Processes (DKP). Building on previous work, the DKP is an alternative representation of the Deep Gaussian Process (DGP), where the representation is over Gramm matrices rather than over function evaluations. Reviewers are in broad agreement that t...
4
[{"review_id": "ysCA7pOZW4-", "reviewer": "Reviewer_UZcb", "summary": "The authors introduce an approximate inference procedure for the deep Wishart process. In order to do so, they define a generalised singular Wishart process and use this as a variational posterior. The DWP appears to be a model consisting of layers ...
A variational approximate posterior for the deep Wishart process. Sebastian W. Ober (Department of Engineering, University of Cambridge, Cambridge, UK, swo25@cam.ac.uk), Laurence Aitchison (Department of Computer Science, University of Bristol, Bristol, UK). Abstract: Recent work introduc...
38,319
yAIYc7YjGbd
neurips
2,021
main
NeurIPS.cc/2021/Conference
1,914
Solving Soft Clustering Ensemble via $k$-Sparse Discrete Wasserstein Barycenter
Clustering ensemble is one of the most important problems in ensemble learning. Though it has been extensively studied in the past decades, existing methods often suffer from issues such as high computational complexity and the difficulty of understanding the consensus. In this paper, we study the more general so...
[ "Ruizhe Qin", "Mengying Li", "Hu Ding" ]
[ "ensemble clustering", "Wasserstein barycenter", "soft clustering", "approximation algorithm" ]
NeurIPS 2021 Poster
Accept (Poster)
The majority reviewers are in favor of accepting this paper. The reviewers in general liked the connection made between the soft clustering ensemble problem and discrete Wasserstein barycenter. The hardness result and convergence analysis rounded out the paper. However, there was concern about the lack of direct pra...
4
[{"review_id": "iXGYNggwWTY", "reviewer": "Reviewer_jwX5", "summary": "This paper proposes a new approach for soft clustering ensemble, using a new problem formulation (a “geometric prototype”), which is equivalent to a Discrete Wasserstein Barycenter problem.\n\nThe proposed algorithm achieves, with a fixed probabilit...
Solving Soft Clustering Ensemble via k-Sparse Discrete Wasserstein Barycenter. Ruizhe Qin, Mengying Li, Hu Ding. School of Computer Science and Technology & School of Data Science, University of Science and Technology of China. red46@mail.ustc.edu.cn, limengy@mail.ustc.edu.cn, huding@ustc...
45,821
xwGeq7I4Opv
neurips
2,021
main
NeurIPS.cc/2021/Conference
3,933
Perturbation-based Regret Analysis of Predictive Control in Linear Time Varying Systems
We study predictive control in a setting where the dynamics are time-varying and linear, and the costs are time-varying and well-conditioned. At each time step, the controller receives the exact predictions of costs, dynamics, and disturbances for the future $k$ time steps. We show that when the prediction window $k$ i...
[ "Yiheng Lin", "Yang Hu", "Guanya Shi", "Haoyuan Sun", "Guannan Qu", "Adam Wierman" ]
[ "Linear time-varying system", "Online convex optimization", "Predictive control", "Sensitivity analysis" ]
NeurIPS 2021 Spotlight
Accept (Spotlight)
The reviewers were unanimous in their appreciation of the paper and hence I recommend a clear Accept. I request the authors to look into improving the notation in the paper and re-assessing the paper in terms of clarity of presentation of the proofs etc to improve readability. Suggestions of this form have been laid ou...
4
[{"review_id": "u0SSmKOUG4B", "reviewer": "Reviewer_Dpte", "summary": "The paper considers the control of a linear time-varying system with predictions. The predictions must be accurate and describe both the transition parameters, the noise, and the cost functions. The length of the prediction window must be larger tha...
Perturbation-based Regret Analysis of Predictive Control in Linear Time Varying Systems. Yiheng Lin, Yang Hu. Tsinghua University, Beijing, China; California Institute of Technology, Pasadena, CA, USA. huy18@mails.tsinghua.edu.cn. Guanya Shi, Haoyuan Sun, California Institute of Technology...
42,427
zHj5fx11jQC
neurips
2,021
main
NeurIPS.cc/2021/Conference
10,112
Marginalised Gaussian Processes with Nested Sampling
Gaussian Process models are a rich distribution over functions with inductive biases controlled by a kernel function. Learning occurs through optimisation of the kernel hyperparameters using the marginal likelihood as the objective. This work proposes nested sampling as a means of marginalising kernel hyperparameters, ...
[ "Fergus Simpson", "Vidhi Lalchand", "Carl Edward Rasmussen" ]
[ "Gaussian Processes", "nested sampling", "Bayesian inference" ]
NeurIPS 2021 Poster
Accept (Poster)
The paper proposes the use of nested sampling (NS) for inference in Gaussian process (GP) models with Gaussian likelihoods and shows the benefits of this fully Bayesian approach over competing methods such as HMC and type-II marginal likelihood hyper-parameter estimation. Although NS has been studied previously and the...
4
[{"review_id": "znLeJ4qdXDC", "reviewer": "Reviewer_XcBt", "summary": "The paper suggests using the nested sampling algorithm to marginalize hyperparameters of Gaussian processes with spectral mixture kernels. The motivation is that the posterior over hyperparameters of such kernels is highly multimodal and nested samp...
Marginalised Gaussian Processes with Nested Sampling. Fergus Simpson (Secondmind, Cambridge, UK, fergus@secondmind.ai), Vidhi Lalchand (University of Cambridge, UK, vr308@cam.ac.uk), Carl E. Rasmussen (University of Cambridge, UK, cer54@cam.ac.uk). Abstract: Gaussian Process models are a rich distribution over functions with induc...
42,600
xmx5rE9QP7R
neurips
2,021
main
NeurIPS.cc/2021/Conference
10,875
You Are the Best Reviewer of Your Own Papers: An Owner-Assisted Scoring Mechanism
I consider the setting where reviewers offer very noisy scores for a number of items for the selection of high-quality ones (e.g., peer review of large conference proceedings) whereas the owner of these items knows the true underlying scores but prefers not to provide this information. To address this withholding of in...
[ "Weijie J Su" ]
[ "Peer review", "mechanism design", "ranking", "isotonic regression", "utility", "convex optimization" ]
NeurIPS 2021 Poster
Accept (Poster)
Researchers in machine learning and many other fields have regularly complained about the quality of reviews. This paper proposes a novel idea to mitigate the "noise" in the decisions in a peer-reviewed conference. The idea leverages the known fact from statistics and optimization that given a ranking of papers, isoton...
4
[{"review_id": "phA1lSzXIUX", "reviewer": "Reviewer_a48h", "summary": "This paper introduces the Isotonic Mechanism, which uses the ranking data from authors to get rid of the noisy scores from reviewers. Theoretically, if using the proposed mechanism, the best strategy is to report the correct ranking, which justifies...
You Are the Best Reviewer of Your Own Papers: An Owner-Assisted Scoring Mechanism. Weijie Su, Department of Statistics and Data Science, University of Pennsylvania, suw@wharton.upenn.edu. Abstract: I consider the setting where reviewers offer very noisy scores for a number of items for the selection of high-quality ones (e.g., p...
37,152
yhjpeuWepoj
neurips
2,021
main
NeurIPS.cc/2021/Conference
584
Exploiting the Intrinsic Neighborhood Structure for Source-free Domain Adaptation
Domain adaptation (DA) aims to alleviate the domain shift between source domain and target domain. Most DA methods require access to the source data, but often that is not possible (e.g. due to data privacy or intellectual property). In this paper, we address the challenging source-free domain adaptation (SFDA) problem...
[ "Shiqi Yang", "Yaxing Wang", "Joost van de weijer", "Luis Herranz", "SHANGLING JUI" ]
[ "source-free domain adaptation", "reciprocal nearest neighbors" ]
NeurIPS 2021 Poster
Accept (Poster)
Following the rebuttal and discussion period, this paper received borderline scores with three leaning towards acceptance and one recommending rejection. All reviewers agreed that this paper proposes a novel algorithm that leverages local structure in a way different from prior work (using nearest neighbor consistency ...
4
[{"review_id": "tSRw9Hfs3Sm", "reviewer": "Reviewer_4Ef6", "summary": "This paper proposes a method for source-free domain adaptation (SFDA). While methods for unsupervised domain adaptation access the labeled source domain data during adaptation, SFDA does not allow models to do so. The proposed method utilizes the ne...
Exploiting the Intrinsic Neighborhood Structure for Source-free Domain Adaptation. Shiqi Yang, Yaxing Wang, Joost van de Weijer, Luis Herranz, Shangling Jui. Computer Vision Center, Universitat Autonoma de Barcelona, Barcelona, Spain; PCALab, Nanjing University of Science and Technology, China; Huawei Kirin Solution, Shan...
46,650
yH2VrkpiCK6
neurips
2,021
main
NeurIPS.cc/2021/Conference
1,334
A Prototype-Oriented Framework for Unsupervised Domain Adaptation
Existing methods for unsupervised domain adaptation often rely on minimizing some statistical distance between the source and target samples in the latent space. To avoid the sampling variability, class imbalance, and data-privacy concerns that often plague these methods, we instead provide a memory and computation-ef...
[ "Korawat Tanwisuth", "XINJIE FAN", "Huangjie Zheng", "Shujian Zhang", "Hao Zhang", "Bo Chen", "Mingyuan Zhou" ]
[ "domain adaptation", "Bayesian methods", "distribution matching", "data privacy", "class imbalance", "computer vision", "deep learning" ]
NeurIPS 2021 Poster
Accept (Poster)
Thanks for your submission to NeurIPS. This paper had somewhat mixed reviews, with 3 mostly positive and 1 more negative reviewer. During the rebuttal phase, the negative reviewer maintained his/her position that the paper was not ready for publication, so we did not fully reach consensus on the paper. However, I ap...
4
[{"review_id": "zkDjXP3E4ck", "reviewer": "Reviewer_KNFV", "summary": "This paper proposes creating prototypes(p) for the source data and aligning these prototypes with the target(t) data via bidirectional conditional transport measure. They use last linear layer's weights as prototypes, so it saves cost, and their met...
A Prototype-Oriented Framework for Unsupervised Domain Adaptation. Korawat Tanwisuth, Xinjie Fan, Huangjie Zheng, Shujian Zhang, Hao Zhang, Bo Chen, Mingyuan Zhou. The University of Texas at Austin; Cornell University; Xidian University. korawat.tanwisuth@utexas.edu, mingyuan.zhou@mccombs.utexas.edu. Abstract: Existi...
56,775
xmX-WjAsf8y
neurips
2,021
main
NeurIPS.cc/2021/Conference
4,308
Risk-Averse Bayes-Adaptive Reinforcement Learning
In this work, we address risk-averse Bayes-adaptive reinforcement learning. We pose the problem of optimising the conditional value at risk (CVaR) of the total return in Bayes-adaptive Markov decision processes (MDPs). We show that a policy optimising CVaR in this setting is risk-averse to both the epistemic uncertain...
[ "Marc Rigter", "Bruno Lacerda", "Nick Hawes" ]
[ "reinforcement learning", "planning", "model-based bayesian reinforcement learning", "risk" ]
NeurIPS 2021 Poster
Accept (Poster)
This paper proposes a method to optimize a CVaR return in Bayes-adaptive MDPs which captures both intrinsic and parametric uncertainty. Interestingly, the paper shows that optimizing CVaR in a Bayesian setting is equivalent to finding a stable equilibrium for an adversarial two player game. The approach is demonstrated...
4
[{"review_id": "qPDuoM1OU2b", "reviewer": "Reviewer_ZrWX", "summary": "The authors introduce the problem of optimising the CVaR in the context of Bayes-Adaptive MDPs, in other words optimising a particular risk metric when faced with uncertainty about the transition dynamics in a Bayesian sequential decision making con...
Risk-Averse Bayes-Adaptive Reinforcement Learning. Marc Rigter, Bruno Lacerda, Nick Hawes. Oxford Robotics Institute, University of Oxford. mrigter@robots.ox.ac.uk, nickh@robots.ox.ac.uk. Abstract: In this w...
46,419
xj2sE--Q90e
neurips
2,021
main
NeurIPS.cc/2021/Conference
4,630
Understanding End-to-End Model-Based Reinforcement Learning Methods as Implicit Parameterization
Estimating the per-state expected cumulative rewards is a critical aspect of reinforcement learning approaches, however the experience is obtained, but standard deep neural-network function-approximation methods are often inefficient in this setting. An alternative approach, exemplified by value iteration networks, is ...
[ "Clement Gehring", "Kenji Kawaguchi", "Jiaoyang Huang", "Leslie Pack Kaelbling" ]
[ "model-based", "reinforcement learning", "end-to-end", "implicit" ]
NeurIPS 2021 Poster
Accept (Poster)
Initially some of the reviewers were concerned with the limiting assumptions used. However, after the author response, all of the reviewers agree that the theoretical results while limited in some ways are interesting and significant enough for acceptance. As reviewer 6XLT points out, adding a balanced discussion of th...
4
[{"review_id": "i4MLr2oHAC9", "reviewer": "Reviewer_u8GD", "summary": "This work introduces learning models under a different parameterization that the authors call implicit parameterization, where instead of training value estimation parameters directly, it is learned implicitly through reward and dynamics model estim...
Understanding End-to-End Model-Based Reinforcement Learning Methods as Implicit Parameterization. Clement Gehring (Electrical Engineering and Computer Sciences, Massachusetts Institute of Technology), Kenji Kawaguchi (Center of Mathematical Sciences and Applications, Harvard University), Jiaoya...
41,758
xmJsuh8xlq
neurips
2,021
main
NeurIPS.cc/2021/Conference
3,090
PortaSpeech: Portable and High-Quality Generative Text-to-Speech
Non-autoregressive text-to-speech (NAR-TTS) models such as FastSpeech 2 and Glow-TTS can synthesize high-quality speech from the given text in parallel. After analyzing two kinds of generative NAR-TTS models (VAE and normalizing flow), we find that: VAE is good at capturing the long-range semantics features (e.g., pros...
[ "Yi Ren", "Jinglin Liu", "Zhou Zhao" ]
[ "text to speech", "speech synthesis", "lightweight architecture", "non-autoregressive" ]
NeurIPS 2021 Poster
Accept (Poster)
The paper presents an interesting blend of VAE and flow based approach for TTS. The reviewers raised several points about the comparisons -- including that some of the baselines are possibly not as good as the original work, since original implementations were not released and third party implementations had to be used...
4
[{"review_id": "voJduH4Uluz", "reviewer": "Reviewer_e88V", "summary": "This paper summarizes a text-to-speech system which uses a VAE with a normalizing flow-based prior to generate spectrograms, and then uses a GAN model to convert the synthesized spectrogram to a waveform. \n ", "questions": "", "limitations": "", "...
PortaSpeech: Portable and High-Quality Generative Text-to-Speech. Yi Ren (Zhejiang University, rayeren@zju.edu.cn), Jinglin Liu (Zhejiang University, jinglinliu@zju.edu.cn), Zhou Zhao (Zhejiang University, zhaozhou@zju.edu.cn). Abstract: Non-autoregressive text-to-speech (NAR-TTS) models such as FastSpeech 2 [24] and Glow-TTS can sy...
44,113
xdk17QJpf5q
neurips
2,021
main
NeurIPS.cc/2021/Conference
5,979
Accommodating Picky Customers: Regret Bound and Exploration Complexity for Multi-Objective Reinforcement Learning
In this paper we consider multi-objective reinforcement learning where the objectives are balanced using preferences. In practice, the preferences are often given in an adversarial manner, e.g., customers can be picky in many applications. We formalize this problem as an episodic learning problem on a Markov decision p...
[ "Jingfeng Wu", "Vladimir Braverman", "Lin Yang" ]
[ "RL", "multi-objective", "sample complexity", "unsupervised exploration" ]
NeurIPS 2021 Poster
Accept (Poster)
This paper looks at an interesting variant of Reinforcement/Online learning, where users arrive with different preferences over several criteria. The problem is therefore linked to multi-objective optimization. Preferences could have been modelled and tackled differently than by linear aggregation, but one must start s...
5
[{"review_id": "ZcR6FYxIG72", "reviewer": "Reviewer_5wS9", "summary": " This paper studies the multi-objective reinforcement learning (MORL) problem where the reward function is an inner product of two d-dimensional vectors, an objective vector and a preference vector. Two algorithms have been proposed for known and no...
Accommodating Picky Customers: Regret Bound and Exploration Complexity for Multi-Objective Reinforcement Learning. Jingfeng Wu (Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, uuujf@jhu.edu), Vladimir Braverman (Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, vova@cs...
44,214
xkQ4MhLv52X
neurips
2,021
main
NeurIPS.cc/2021/Conference
3,135
Learning-Augmented Dynamic Power Management with Multiple States via New Ski Rental Bounds
We study the online problem of minimizing power consumption in systems with multiple power-saving states. During idle periods of unknown lengths, an algorithm has to choose between power-saving states of different energy consumption and wake-up costs. We develop a learning-augmented online algorithm that makes decision...
[ "Antonios Antoniadis", "Christian Coester", "Marek Elias", "Adam Polak", "Bertrand Simon" ]
[ "Learning-augmented algorithms", "online algorithms", "energy-efficient algorithms", "power management", "ski rental" ]
NeurIPS 2021 Poster
Accept (Poster)
This paper was borderline. The reviewers appreciated the contributions of this paper to the online algorithms literature, but some reviewers were skeptical about the learning aspect (fit for neurips vs theory conferences) and the application to power management. After discussions, I believe that papers in the area of a...
4
[{"review_id": "b-86kN1QfhR", "reviewer": "Reviewer_LdW3", "summary": "This paper is motivated by the challenge of dynamic power management in environments such as data centers. Computational devices like CPUs can transition to a sleep mode, for the purpose of preserving power. However, \"waking up\" entails consuming ...
Learning-Augmented Dynamic Power Management with Multiple States via New Ski Rental Bounds. Antonios Antoniadis (University of Twente, Enschede, The Netherlands, antoniadis@utwente.nl), Christian Coester (Tel Aviv University, Tel Aviv, Israel), Marek Eliáš, Adam Polak (EPFL), Bertrand Simon (U...
44,273
yAvCV6NwWQ
neurips
2,021
main
NeurIPS.cc/2021/Conference
4,249
On Linear Stability of SGD and Input-Smoothness of Neural Networks
The multiplicative structure of parameters and input data in the first layer of neural networks is explored to build connection between the landscape of the loss function with respect to parameters and the landscape of the model function with respect to input data. By this connection, it is shown that flat minima regul...
[ "Chao Ma", "Lexing Ying" ]
[ "Stochastic Gradient Descent", "Implicit regularization", "Linear stability", "Sobolev Seminorm" ]
NeurIPS 2021 Spotlight
Accept (Spotlight)
All reviewers have praised the clarity of the paper as well as the soundness and novelty of the results. Some important questions regarding completeness of some definitions and applicability of the current assumptions have been raised in the reviews, which the authors have clarified with a strong rebuttal. I encourage ...
4
[{"review_id": "pvf7m9rXrN", "reviewer": "Reviewer_gdzy", "summary": "Prior work on minima stability for SGD considered linear stability in expectation using second order moment. This paper considers linear stability in higher moments, while providing necessary and sufficient conditions for it. Using these conditions a...
On Linear Stability of SGD and Input-Smoothness of Neural Networks. Chao Ma (Department of Mathematics, Stanford University, Stanford, CA 94305, chaoma@stanford.edu), Lexing Ying (Department of Mathematics, Stanford University, Stanford, CA 94305, lexing@stanford.edu). Abstract: The multiplicative structure of parameters and input da...
43,056
xz80iPFIjvG
neurips
2,021
main
NeurIPS.cc/2021/Conference
2,247
On the Algorithmic Stability of Adversarial Training
Adversarial training is a popular tool to remedy the vulnerability of deep learning models against adversarial attacks, and there is a rich theoretical literature on the training loss of adversarial training algorithms. In contrast, this paper studies the algorithmic stability of a generic adversarial training algori...
[ "Yue Xing", "Qifan Song", "Guang Cheng" ]
[ "Algorithmic Stability", "Generalization" ]
NeurIPS 2021 Poster
Accept (Poster)
The large generalization gap of adversarial training is a central problem of adversarial robustness. This paper makes the first attempt to study this problem from an algorithmic stability point of view. The authors establish upper and lower bounds for robust accuracy of adversarial training and investigate the causes o...
4
[{"review_id": "qieVKZCJ9wr", "reviewer": "Reviewer_Sgsz", "summary": "This work builds upon Bassily et al. and extends stability analysis to adversarial training algorithms. The authors use uniform algorithmic stability to derive lower and upper bounds on the robust accuracy. Based on the theoretical analysis, they a...
On the Algorithmic Stability of Adversarial Training Yue Xing Department of Statistics Purdue University xing49@purdue.edu Qifan Song Department of Statistics Purdue University gfsong@purdue.edu Guang Cheng Department of Statistics Purdue University chenggg@purdue.edu Abstract The adversarial training is a popular t...
42,452
xZvuqfT6Otj
neurips
2021
main
NeurIPS.cc/2021/Conference
6,216
Fixes That Fail: Self-Defeating Improvements in Machine-Learning Systems
Machine-learning systems such as self-driving cars or virtual assistants are composed of a large number of machine-learning models that recognize image content, transcribe speech, analyze natural language, infer preferences, rank options, etc. Models in these systems are often developed and trained independently, which...
[ "Ruihan Wu", "Chuan Guo", "Awni Hannun", "Laurens van der Maaten" ]
[ "System-level Evaluation", "Model Compatibility", "Error Decomposition" ]
NeurIPS 2021 Poster
Accept (Poster)
Three reviewers gave favorable scores, one borderline, and one negative (ARBJ). The last reviewer engaged in a productive discussion with the authors, so we can expect the final paper to be improved. The paper addresses an important question and has been refined a lot since it was first submitted to ICML 2021, so it des...
5
[{"review_id": "lISCOhlOKmr", "reviewer": "Reviewer_RhPc", "summary": "This paper explores, \"self-defeating improvements, potential negative\neffects of improving a part of a module machine learning system. The\ncauses of such self-defeating improvements are analyzed through Bayes\nerror decomposition. The paper prese...
Fixes That Fail: Self-Defeating Improvements in Machine-Learning Systems Ruihan Wu* Cornell University rw565@cornell.edu Chuan Guo Awni Hannun Laurens van der Maaten Facebook AI Research {chuanguo, awni, lvdmaaten}@fb.com Abstract Machine-learning systems such as self-driving cars or virtual...
44,336
x_sdq4ZYSOl
neurips
2021
main
NeurIPS.cc/2021/Conference
7,361
Improved Regret Bounds for Tracking Experts with Memory
We address the problem of sequential prediction with expert advice in a non-stationary environment with long-term memory guarantees in the sense of Bousquet and Warmuth [4]. We give a linear-time algorithm that improves on the best known regret bound [27]. This algorithm incorporates a relative entropy projection step....
[ "James Robinson", "Mark Herbster" ]
[ "online learning", "prediction with expert advice" ]
NeurIPS 2021 Poster
Accept (Poster)
Following our discussion, the reviewers and I all agree that this is a well-written paper that makes a solid contribution (albeit on a somewhat niche problem) and the techniques are interesting and would be of interest to the community. I encourage the authors to further clarify the relation to prior work in their fin...
3
[{"review_id": "GrVrbnWq9Q", "reviewer": "Reviewer_ETi1", "summary": "The setting in the paper is the one of prediction with expert advice in a non-stationary environment - the learner competes against a sequence of switching experts [This setting was introduced by Bousquet and Warmuth, 2002].\nThe authors propose an a...
Improved Regret Bounds for Tracking Experts with Memory James Robinson Department of Computer Science University College London London, United Kingdom robinson@cs.ucl.ac.uk Mark Herbster Department of Computer Science University College London London, United Kingdom Abstract We...
41,779