ICML 2023 proceedings entries. Fields per entry: title, authors, venue, links (abstract page, PDF, OpenReview), abstract.
Title: Lower Bounds for Learning in Revealing POMDPs
Authors: Fan Chen, Huan Wang, Caiming Xiong, Song Mei, Yu Bai
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/chen23ae.html | PDF https://proceedings.mlr.press/v202/chen23ae/chen23ae.pdf | OpenReview https://openreview.net/forum?id=9afHiUivJR
Abstract: This paper studies the fundamental limits of reinforcement learning (RL) in the challenging partially observable setting. While it is well-established that learning in Partially Observable Markov Decision Processes (POMDPs) requires exponentially many samples in the worst case, a surge of recent work shows that polynom...

Title: Implicit Neural Spatial Representations for Time-dependent PDEs
Authors: Honglin Chen, Rundi Wu, Eitan Grinspun, Changxi Zheng, Peter Yichen Chen
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/chen23af.html | PDF https://proceedings.mlr.press/v202/chen23af/chen23af.pdf | OpenReview https://openreview.net/forum?id=7BO6rpA6qQ
Abstract: Implicit Neural Spatial Representation (INSR) has emerged as an effective representation of spatially-dependent vector fields. This work explores solving time-dependent PDEs with INSR. Classical PDE solvers introduce both temporal and spatial discretizations. Common spatial discretizations include meshes and meshless p...

Title: BEATs: Audio Pre-Training with Acoustic Tokenizers
Authors: Sanyuan Chen, Yu Wu, Chengyi Wang, Shujie Liu, Daniel Tompkins, Zhuo Chen, Wanxiang Che, Xiangzhan Yu, Furu Wei
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/chen23ag.html | PDF https://proceedings.mlr.press/v202/chen23ag/chen23ag.pdf | OpenReview https://openreview.net/forum?id=Fj0PRtd4e6
Abstract: We introduce a self-supervised learning (SSL) framework BEATs for general audio representation pre-training, where we optimize an acoustic tokenizer and an audio SSL model by iterations. Unlike the previous audio SSL models that employ reconstruction loss for pre-training, our audio SSL model is trained with the discre...
Title: Learning to Incentivize Information Acquisition: Proper Scoring Rules Meet Principal-Agent Model
Authors: Siyu Chen, Jibang Wu, Yifan Wu, Zhuoran Yang
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/chen23ah.html | PDF https://proceedings.mlr.press/v202/chen23ah/chen23ah.pdf | OpenReview https://openreview.net/forum?id=xDIppoiFrA
Abstract: We study the incentivized information acquisition problem, where a principal hires an agent to gather information on her behalf. Such a problem is modeled as a Stackelberg game between the principal and the agent, where the principal announces a scoring rule that specifies the payment, and the agent then chooses a...

Title: Faster Gradient-Free Algorithms for Nonsmooth Nonconvex Stochastic Optimization
Authors: Lesi Chen, Jing Xu, Luo Luo
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/chen23ai.html | PDF https://proceedings.mlr.press/v202/chen23ai/chen23ai.pdf | OpenReview https://openreview.net/forum?id=VqnEAUnfvu
Abstract: We consider the optimization problem of the form $\min_{x \in \mathbb{R}^d} f(x) \triangleq \mathbb{E}[F(x;\xi)]$, where the component $F(x;\xi)$ is $L$-mean-squared Lipschitz but possibly nonconvex and nonsmooth. The recently proposed gradient-free method requires at most $\mathcal{O}( L^4 d^{3/2} \epsilon^{-4} + \Del...
Title: Efficient Personalized Federated Learning via Sparse Model-Adaptation
Authors: Daoyuan Chen, Liuyi Yao, Dawei Gao, Bolin Ding, Yaliang Li
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/chen23aj.html | PDF https://proceedings.mlr.press/v202/chen23aj/chen23aj.pdf | OpenReview https://openreview.net/forum?id=ieSN7Xyo8g
Abstract: Federated Learning (FL) aims to train machine learning models for multiple clients without sharing their own private data. Due to the heterogeneity of clients’ local data distribution, recent studies explore the personalized FL that learns and deploys distinct local models with the help of auxiliary global models. Howe...

Title: A Gromov-Wasserstein Geometric View of Spectrum-Preserving Graph Coarsening
Authors: Yifan Chen, Rentian Yao, Yun Yang, Jie Chen
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/chen23ak.html | PDF https://proceedings.mlr.press/v202/chen23ak/chen23ak.pdf | OpenReview https://openreview.net/forum?id=8lCz8flXkr
Abstract: Graph coarsening is a technique for solving large-scale graph problems by working on a smaller version of the original graph, and possibly interpolating the results back to the original graph. It has a long history in scientific computing and has recently gained popularity in machine learning, particularly in methods t...

Title: How to address monotonicity for model risk management?
Authors: Dangxing Chen, Weicheng Ye
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/chen23al.html | PDF https://proceedings.mlr.press/v202/chen23al/chen23al.pdf | OpenReview https://openreview.net/forum?id=fB9YIfI6WQ
Abstract: In this paper, we study the problem of establishing the accountability and fairness of transparent machine learning models through monotonicity. Although there have been numerous studies on individual monotonicity, pairwise monotonicity is often overlooked in the existing literature. This paper studies transparent neur...

Title: Sketched Ridgeless Linear Regression: The Role of Downsampling
Authors: Xin Chen, Yicheng Zeng, Siyue Yang, Qiang Sun
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/chen23am.html | PDF https://proceedings.mlr.press/v202/chen23am/chen23am.pdf | OpenReview https://openreview.net/forum?id=WmAfdffvfe
Abstract: Overparametrization often helps improve the generalization performance. This paper presents a dual view of overparametrization suggesting that downsampling may also help generalize. Focusing on the proportional regime $m\asymp n \asymp p$, where $m$ represents the sketching size, $n$ is the sample size, and $p$ is the ...
Title: Context-Aware Bayesian Network Actor-Critic Methods for Cooperative Multi-Agent Reinforcement Learning
Authors: Dingyang Chen, Qi Zhang
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/chen23an.html | PDF https://proceedings.mlr.press/v202/chen23an/chen23an.pdf | OpenReview https://openreview.net/forum?id=pLQoqbUTue
Abstract: Executing actions in a correlated manner is a common strategy for human coordination that often leads to better cooperation, which is also potentially beneficial for cooperative multi-agent reinforcement learning (MARL). However, the recent success of MARL relies heavily on the convenient paradigm of purely decentraliz...

Title: Bidirectional Learning for Offline Model-based Biological Sequence Design
Authors: Can Chen, Yingxue Zhang, Xue Liu, Mark Coates
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/chen23ao.html | PDF https://proceedings.mlr.press/v202/chen23ao/chen23ao.pdf | OpenReview https://openreview.net/forum?id=CUORPu6abU
Abstract: Offline model-based optimization aims to maximize a black-box objective function with a static dataset of designs and their scores. In this paper, we focus on biological sequence design to maximize some sequence score. A recent approach employs bidirectional learning, combining a forward mapping for exploitation and a ...

Title: Learning to Jump: Thinning and Thickening Latent Counts for Generative Modeling
Authors: Tianqi Chen, Mingyuan Zhou
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/chen23ap.html | PDF https://proceedings.mlr.press/v202/chen23ap/chen23ap.pdf | OpenReview https://openreview.net/forum?id=Uj5AVsHkoX
Abstract: Learning to denoise has emerged as a prominent paradigm to design state-of-the-art deep generative models for natural images. How to use it to model the distributions of both continuous real-valued data and categorical data has been well studied in recently proposed diffusion models. However, it is found in this paper ...

Title: Lifelong Language Pretraining with Distribution-Specialized Experts
Authors: Wuyang Chen, Yanqi Zhou, Nan Du, Yanping Huang, James Laudon, Zhifeng Chen, Claire Cui
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/chen23aq.html | PDF https://proceedings.mlr.press/v202/chen23aq/chen23aq.pdf | OpenReview https://openreview.net/forum?id=Q4QFG5Fe4O
Abstract: Pretraining on a large-scale corpus has become a standard method to build general language models (LMs). Adapting a model to new data distributions targeting different downstream tasks poses significant challenges. Naive fine-tuning may incur catastrophic forgetting when the over-parameterized LMs overfit the new data ...

Title: Generalized-Smooth Nonconvex Optimization is As Efficient As Smooth Nonconvex Optimization
Authors: Ziyi Chen, Yi Zhou, Yingbin Liang, Zhaosong Lu
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/chen23ar.html | PDF https://proceedings.mlr.press/v202/chen23ar/chen23ar.pdf | OpenReview https://openreview.net/forum?id=v50lXuFMW0
Abstract: Various optimal gradient-based algorithms have been developed for smooth nonconvex optimization. However, many nonconvex machine learning problems do not belong to the class of smooth functions and therefore the existing algorithms are sub-optimal. Instead, these problems have been shown to satisfy certain generalized-...
Title: Weakly Supervised Regression with Interval Targets
Authors: Xin Cheng, Yuzhou Cao, Ximing Li, Bo An, Lei Feng
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/cheng23a.html | PDF https://proceedings.mlr.press/v202/cheng23a/cheng23a.pdf | OpenReview https://openreview.net/forum?id=O2XerBwfFk
Abstract: This paper investigates an interesting weakly supervised regression setting called regression with interval targets (RIT). Although some of the previous methods on relevant regression settings can be adapted to RIT, they are not statistically consistent, and thus their empirical performance is not guaranteed. In this p...

Title: PLay: Parametrically Conditioned Layout Generation using Latent Diffusion
Authors: Chin-Yi Cheng, Forrest Huang, Gang Li, Yang Li
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/cheng23b.html | PDF https://proceedings.mlr.press/v202/cheng23b/cheng23b.pdf | OpenReview https://openreview.net/forum?id=2jvwyTm6Pk
Abstract: Layout design is an important task in various design fields, including user interface, document, and graphic design. As this task requires tedious manual effort by designers, prior works have attempted to automate this process using generative models, but commonly fell short of providing intuitive user controls and ac...
Title: Identification of the Adversary from a Single Adversarial Example
Authors: Minhao Cheng, Rui Min, Haochen Sun, Pin-Yu Chen
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/cheng23c.html | PDF https://proceedings.mlr.press/v202/cheng23c/cheng23c.pdf | OpenReview https://openreview.net/forum?id=HBrQI0tX8F
Abstract: Deep neural networks have been shown vulnerable to adversarial examples. Even though many defense methods have been proposed to enhance the robustness, it is still a long way toward providing an attack-free method to build a trustworthy machine learning system. In this paper, instead of enhancing the robustness, we tak...

Title: Parallel Online Clustering of Bandits via Hedonic Game
Authors: Xiaotong Cheng, Cheng Pan, Setareh Maghsudi
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/cheng23d.html | PDF https://proceedings.mlr.press/v202/cheng23d/cheng23d.pdf | OpenReview https://openreview.net/forum?id=pLky79p1Ne
Abstract: Contextual bandit algorithms appear in several applications, such as online advertisement and recommendation systems like personalized education or personalized medicine. Individually-tailored recommendations boost the performance of the underlying application; nevertheless, providing individual suggestions becomes cos...

Title: Mu$^2$SLAM: Multitask, Multilingual Speech and Language Models
Authors: Yong Cheng, Yu Zhang, Melvin Johnson, Wolfgang Macherey, Ankur Bapna
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/cheng23e.html | PDF https://proceedings.mlr.press/v202/cheng23e/cheng23e.pdf | OpenReview https://openreview.net/forum?id=eIQIcUKs0T
Abstract: We present Mu$^2$SLAM, a multilingual sequence-to-sequence model pre-trained jointly on unlabeled speech, unlabeled text and supervised data spanning Automatic Speech Recognition (ASR), Automatic Speech Translation (AST) and Machine Translation (MT), in over 100 languages. By leveraging a quantized representation of sp...

Title: Understanding the Role of Feedback in Online Learning with Switching Costs
Authors: Duo Cheng, Xingyu Zhou, Bo Ji
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/cheng23f.html | PDF https://proceedings.mlr.press/v202/cheng23f/cheng23f.pdf | OpenReview https://openreview.net/forum?id=haYpY2kDAb
Abstract: In this paper, we study the role of feedback in online learning with switching costs. It has been shown that the minimax regret is $\widetilde{\Theta}(T^{2/3})$ under bandit feedback and improves to $\widetilde{\Theta}(\sqrt{T})$ under full-information feedback, where $T$ is the length of the time horizon. However, it ...

Title: Tighter Bounds on the Expressivity of Transformer Encoders
Authors: David Chiang, Peter Cholak, Anand Pillay
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/chiang23a.html | PDF https://proceedings.mlr.press/v202/chiang23a/chiang23a.pdf | OpenReview https://openreview.net/forum?id=XKcogevHj8
Abstract: Characterizing neural networks in terms of better-understood formal systems has the potential to yield new insights into the power and limitations of these networks. Doing so for transformers remains an active area of research. Bhattamishra and others have shown that transformer encoders are at least as expressive as a...
Title: Provably Learning Diverse Features in Multi-View Data with Midpoint Mixup
Authors: Muthu Chidambaram, Xiang Wang, Chenwei Wu, Rong Ge
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/chidambaram23a.html | PDF https://proceedings.mlr.press/v202/chidambaram23a/chidambaram23a.pdf | OpenReview https://openreview.net/forum?id=TXGh5DI3FP
Abstract: Mixup is a data augmentation technique that relies on training using random convex combinations of data points and their labels. In recent years, Mixup has become a standard primitive used in the training of state-of-the-art image classification models due to its demonstrated benefits over empirical risk minimization w...

Title: Hiding Data Helps: On the Benefits of Masking for Sparse Coding
Authors: Muthu Chidambaram, Chenwei Wu, Yu Cheng, Rong Ge
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/chidambaram23b.html | PDF https://proceedings.mlr.press/v202/chidambaram23b/chidambaram23b.pdf | OpenReview https://openreview.net/forum?id=TZvDKSg6im
Abstract: Sparse coding, which refers to modeling a signal as sparse linear combinations of the elements of a learned dictionary, has proven to be a successful (and interpretable) approach in applications such as signal processing, computer vision, and medical imaging. While this success has spurred much work on provable guarant...

Title: PINA: Leveraging Side Information in eXtreme Multi-label Classification via Predicted Instance Neighborhood Aggregation
Authors: Eli Chien, Jiong Zhang, Cho-Jui Hsieh, Jyun-Yu Jiang, Wei-Cheng Chang, Olgica Milenkovic, Hsiang-Fu Yu
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/chien23a.html | PDF https://proceedings.mlr.press/v202/chien23a/chien23a.pdf | OpenReview https://openreview.net/forum?id=ltFbrFDbld
Abstract: The eXtreme Multi-label Classification (XMC) problem seeks to find relevant labels from an exceptionally large label space. Most of the existing XMC learners focus on the extraction of semantic features from input query text. However, conventional XMC studies usually neglect the side information of instances and labels...

Title: Tight Certification of Adversarially Trained Neural Networks via Nonconvex Low-Rank Semidefinite Relaxations
Authors: Hong-Ming Chiu, Richard Y. Zhang
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/chiu23a.html | PDF https://proceedings.mlr.press/v202/chiu23a/chiu23a.pdf | OpenReview https://openreview.net/forum?id=SwWLzvsURq
Abstract: Adversarial training is well-known to produce high-quality neural network models that are empirically robust against adversarial perturbations. Nevertheless, once a model has been adversarially trained, one often desires a certification that the model is truly robust against all future attacks. Unfortunately, when face...
Title: Neural Latent Aligner: Cross-trial Alignment for Learning Representations of Complex, Naturalistic Neural Data
Authors: Cheol Jun Cho, Edward Chang, Gopala Anumanchipalli
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/cho23a.html | PDF https://proceedings.mlr.press/v202/cho23a/cho23a.pdf | OpenReview https://openreview.net/forum?id=S2hcTJB6fb
Abstract: Understanding the neural implementation of complex human behaviors is one of the major goals in neuroscience. To this end, it is crucial to find a true representation of the neural data, which is challenging due to the high complexity of behaviors and the low signal-to-noise ratio (SNR) of the signals. Here, we propose a nov...
Title: On the Convergence of Federated Averaging with Cyclic Client Participation
Authors: Yae Jee Cho, Pranay Sharma, Gauri Joshi, Zheng Xu, Satyen Kale, Tong Zhang
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/cho23b.html | PDF https://proceedings.mlr.press/v202/cho23b/cho23b.pdf | OpenReview https://openreview.net/forum?id=d8LTNXt97w
Abstract: Federated Averaging (FedAvg) and its variants are the most popular optimization algorithms in federated learning (FL). Previous convergence analyses of FedAvg either assume full client participation or partial client participation where the clients can be uniformly sampled. However, in practical cross-device FL systems...

Title: GREAD: Graph Neural Reaction-Diffusion Networks
Authors: Jeongwhan Choi, Seoyoung Hong, Noseong Park, Sung-Bae Cho
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/choi23a.html | PDF https://proceedings.mlr.press/v202/choi23a/choi23a.pdf | OpenReview https://openreview.net/forum?id=LMay53U4ke
Abstract: Graph neural networks (GNNs) are one of the most popular research topics for deep learning. GNN methods typically have been designed on top of the graph signal processing theory. In particular, diffusion equations have been widely used for designing the core processing layer of GNNs, and therefore they are inevitably v...

Title: Is Overfitting Necessary for Implicit Video Representation?
Authors: Hee Min Choi, Hyoa Kang, Dokwan Oh
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/choi23b.html | PDF https://proceedings.mlr.press/v202/choi23b/choi23b.pdf | OpenReview https://openreview.net/forum?id=JuNIuHLm9y
Abstract: Compact representation of multimedia signals using implicit neural representations (INRs) has advanced significantly over the past few years, and recent works address their applications to video. Existing studies on video INR have focused on network architecture design as all video information is contained within netwo...

Title: Semi-Parametric Contextual Pricing Algorithm using Cox Proportional Hazards Model
Authors: Young-Geun Choi, Gi-Soo Kim, Yunseo Choi, Wooseong Cho, Myunghee Cho Paik, Min-Hwan Oh
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/choi23c.html | PDF https://proceedings.mlr.press/v202/choi23c/choi23c.pdf | OpenReview https://openreview.net/forum?id=wkr4r2Cw3i
Abstract: Contextual dynamic pricing is a problem of setting prices based on current contextual information and previous sales history to maximize revenue. A popular approach is to postulate a distribution of customer valuation as a function of contextual information and the baseline valuation. A semi-parametric setting, where t...

Title: Restoration based Generative Models
Authors: Jaemoo Choi, Yesom Park, Myungjoo Kang
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/choi23d.html | PDF https://proceedings.mlr.press/v202/choi23d/choi23d.pdf | OpenReview https://openreview.net/forum?id=YR0TzWNzD8
Abstract: Denoising diffusion models (DDMs) have recently attracted increasing attention by showing impressive synthesis quality. DDMs are built on a diffusion process that pushes data to the noise distribution and the models learn to denoise. In this paper, we establish the interpretation of DDMs in terms of image restoration (...
Title: Concept-based Explanations for Out-of-Distribution Detectors
Authors: Jihye Choi, Jayaram Raghuram, Ryan Feng, Jiefeng Chen, Somesh Jha, Atul Prakash
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/choi23e.html | PDF https://proceedings.mlr.press/v202/choi23e/choi23e.pdf | OpenReview https://openreview.net/forum?id=a33IYBCFey
Abstract: Out-of-distribution (OOD) detection plays a crucial role in ensuring the safe deployment of deep neural network (DNN) classifiers. While a myriad of methods have focused on improving the performance of OOD detectors, a critical gap remains in interpreting their decisions. We help bridge this gap by providing explanatio...

Title: Active causal structure learning with advice
Authors: Davin Choo, Themistoklis Gouleakis, Arnab Bhattacharyya
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/choo23a.html | PDF https://proceedings.mlr.press/v202/choo23a/choo23a.pdf | OpenReview https://openreview.net/forum?id=u2Ap3vr5zQ
Abstract: We introduce the problem of active causal structure learning with advice. In the typical well-studied setting, the learning algorithm is given the essential graph for the observational distribution and is asked to recover the underlying causal directed acyclic graph (DAG) $G^*$ while minimizing the number of interventi...

Title: New metrics and search algorithms for weighted causal DAGs
Authors: Davin Choo, Kirankumar Shiragur
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/choo23b.html | PDF https://proceedings.mlr.press/v202/choo23b/choo23b.pdf | OpenReview https://openreview.net/forum?id=gnb9UUFqsc
Abstract: Recovering causal relationships from data is an important problem. Using observational data, one can typically only recover causal graphs up to a Markov equivalence class and additional assumptions or interventional data are needed for complete recovery. In this work, under some standard assumptions, we study causal gr...

Title: Computational Doob h-transforms for Online Filtering of Discretely Observed Diffusions
Authors: Nicolas Chopin, Andras Fulop, Jeremy Heng, Alexandre H. Thiery
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/chopin23a.html | PDF https://proceedings.mlr.press/v202/chopin23a/chopin23a.pdf | OpenReview https://openreview.net/forum?id=HafOgQ1zW2
Abstract: This paper is concerned with online filtering of discretely observed nonlinear diffusion processes. Our approach is based on the fully adapted auxiliary particle filter, which involves Doob’s $h$-transforms that are typically intractable. We propose a computational framework to approximate these $h$-transforms by solvi...

Title: Multi-Epoch Matrix Factorization Mechanisms for Private Machine Learning
Authors: Christopher A. Choquette-Choo, Hugh Brendan Mcmahan, J Keith Rush, Abhradeep Guha Thakurta
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/choquette-choo23a.html | PDF https://proceedings.mlr.press/v202/choquette-choo23a/choquette-choo23a.pdf | OpenReview https://openreview.net/forum?id=ZVxT2ToHR5
Abstract: We introduce new differentially private (DP) mechanisms for gradient-based machine learning (ML) with multiple passes (epochs) over a dataset, substantially improving the achievable privacy-utility-computation tradeoffs. We formalize the problem of DP mechanisms for adaptive streams with multiple participations and int...
Title: Taming graph kernels with random features
Authors: Krzysztof Marcin Choromanski
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/choromanski23a.html | PDF https://proceedings.mlr.press/v202/choromanski23a/choromanski23a.pdf | OpenReview https://openreview.net/forum?id=H21qm4xyk9
Abstract: We introduce in this paper the mechanism of graph random features (GRFs). GRFs can be used to construct unbiased randomized estimators of several important kernels defined on graphs’ nodes, in particular the regularized Laplacian kernel. As regular RFs for non-graph kernels, they provide means to scale up kernel method...

Title: Efficient Graph Field Integrators Meet Point Clouds
Authors: Krzysztof Marcin Choromanski, Arijit Sehanobish, Han Lin, Yunfan Zhao, Eli Berger, Tetiana Parshakova, Alvin Pan, David Watkins, Tianyi Zhang, Valerii Likhosherstov, Somnath Basu Roy Chowdhury, Kumar Avinava Dubey, Deepali Jain, Tamas Sarlos, Snigdha Chaturvedi, Adrian Weller
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/choromanski23b.html | PDF https://proceedings.mlr.press/v202/choromanski23b/choromanski23b.pdf | OpenReview https://openreview.net/forum?id=Y5jGkbZ0W3
Abstract: We present two new classes of algorithms for efficient field integration on graphs encoding point cloud data. The first class, $\mathrm{SeparatorFactorization}$ (SF), leverages the bounded genus of point cloud mesh graphs, while the second class, $\mathrm{RFDiffusion}$ (RFD), uses popular $\epsilon$-nearest-neighbor gr...

Title: ContraBAR: Contrastive Bayes-Adaptive Deep RL
Authors: Era Choshen, Aviv Tamar
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/choshen23a.html | PDF https://proceedings.mlr.press/v202/choshen23a/choshen23a.pdf | OpenReview https://openreview.net/forum?id=EHgAM1xnWv
Abstract: In meta reinforcement learning (meta RL), an agent seeks a Bayes-optimal policy – the optimal policy when facing an unknown task that is sampled from some known task distribution. Previous approaches tackled this problem by inferring a $\textit{belief}$ over task parameters, using variational inference methods. Motivat...

Title: Forget Unlearning: Towards True Data-Deletion in Machine Learning
Authors: Rishav Chourasia, Neil Shah
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/chourasia23a.html | PDF https://proceedings.mlr.press/v202/chourasia23a/chourasia23a.pdf | OpenReview https://openreview.net/forum?id=aOU7OvlxeJ
Abstract: Unlearning algorithms aim to remove deleted data’s influence from trained models at a cost lower than full retraining. However, prior guarantees of unlearning in literature are flawed and don’t protect the privacy of deleted records. We show that when people delete their data as a function of published models, records ...

Title: Patch-level Routing in Mixture-of-Experts is Provably Sample-efficient for Convolutional Neural Networks
Authors: Mohammed Nowaz Rabbani Chowdhury, Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/chowdhury23a.html | PDF https://proceedings.mlr.press/v202/chowdhury23a/chowdhury23a.pdf | OpenReview https://openreview.net/forum?id=kNzaZ0jbIg
Abstract: In deep learning, mixture-of-experts (MoE) activates one or few experts (sub-networks) on a per-sample or per-token basis, resulting in significant computation reduction. The recently proposed patch-level routing in MoE (pMoE) divides each input into $n$ patches (or tokens) and sends $l$ patches ($l\ll n$) to each expe...
Title: What do CNNs Learn in the First Layer and Why? A Linear Systems Perspective
Authors: Rhea Chowers, Yair Weiss
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/chowers23a.html | PDF https://proceedings.mlr.press/v202/chowers23a/chowers23a.pdf | OpenReview https://openreview.net/forum?id=RJGad2VFYk
Abstract: It has previously been reported that the representation that is learned in the first layer of deep Convolutional Neural Networks (CNNs) is highly consistent across initializations and architectures. In this work, we quantify this consistency by considering the first layer as a filter bank and measuring its energy distr...

Title: Unifying Molecular and Textual Representations via Multi-task Language Modelling
Authors: Dimitrios Christofidellis, Giorgio Giannone, Jannis Born, Ole Winther, Teodoro Laino, Matteo Manica
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/christofidellis23a.html | PDF https://proceedings.mlr.press/v202/christofidellis23a/christofidellis23a.pdf | OpenReview https://openreview.net/forum?id=7TLjOO4cvm
Abstract: The recent advances in neural language models have also been successfully applied to the field of chemistry, offering generative solutions for classical problems in molecular design and synthesis planning. These new methods have the potential to fuel a new era of data-driven automation in scientific discovery. However,...
Title: Wasserstein Barycenter Matching for Graph Size Generalization of Message Passing Neural Networks
Authors: Xu Chu, Yujie Jin, Xin Wang, Shanghang Zhang, Yasha Wang, Wenwu Zhu, Hong Mei
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/chu23a.html | PDF https://proceedings.mlr.press/v202/chu23a/chu23a.pdf | OpenReview https://openreview.net/forum?id=Z1I4WrV5TG
Abstract: Graph size generalization is hard for message passing neural networks (MPNNs). The graph-level classification performance of MPNNs degrades across various graph sizes. Recently, theoretical studies reveal that a slow uncontrollable convergence rate w.r.t. graph size could adversely affect the size generalization. To ad...
Title: Shape-Guided Dual-Memory Learning for 3D Anomaly Detection
Authors: Yu-Min Chu, Chieh Liu, Ting-I Hsieh, Hwann-Tzong Chen, Tyng-Luh Liu
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/chu23b.html | PDF https://proceedings.mlr.press/v202/chu23b/chu23b.pdf | OpenReview https://openreview.net/forum?id=IkSGn9fcPz
Abstract: We present a shape-guided expert-learning framework to tackle the problem of unsupervised 3D anomaly detection. Our method is established on the effectiveness of two specialized expert models and their synergy to localize anomalous regions from color and shape modalities. The first expert utilizes geometric information...

Title: Multiply Robust Off-policy Evaluation and Learning under Truncation by Death
Authors: Jianing Chu, Shu Yang, Wenbin Lu
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/chu23c.html | PDF https://proceedings.mlr.press/v202/chu23c/chu23c.pdf | OpenReview https://openreview.net/forum?id=FQlsEvyQ4N
Abstract: Typical off-policy evaluation (OPE) and off-policy learning (OPL) are not well-defined problems under "truncation by death", where the outcome of interest is not defined after some events, such as death. The standard OPE no longer yields consistent estimators, and the standard OPL results in suboptimal policies. In thi...

Title: InfoOT: Information Maximizing Optimal Transport
Authors: Ching-Yao Chuang, Stefanie Jegelka, David Alvarez-Melis
Venue: ICML 2023
Links: abs https://proceedings.mlr.press/v202/chuang23a.html | PDF https://proceedings.mlr.press/v202/chuang23a/chuang23a.pdf | OpenReview https://openreview.net/forum?id=m2BVUzNzKJ
Abstract: Optimal transport aligns samples across distributions by minimizing the transportation cost between them, e.g., the geometric distances. Yet, it ignores coherence structure in the data such as clusters, does not handle outliers well, and cannot integrate new data points. To address these drawbacks, we propose InfoOT, a...

Title: A Toy Model of Universality: Reverse Engineering how Networks Learn Group Operations
Links: abs https://proceedings.mlr.press/v202/chughtai23a.html | PDF https://proceedings.mlr.press/v202/chughtai23a/chughtai23a.pdf | OpenReview https://openreview.net/forum?id=jCOrkuUpss
https://proceedings.mlr.press/v202/chughtai23a.html
Bilal Chughtai, Lawrence Chan, Neel Nanda
https://proceedings.mlr.press/v202/chughtai23a.html
ICML 2023
Universality is a key hypothesis in mechanistic interpretability – that different models learn similar features and circuits when trained on similar tasks. In this work, we study the universality hypothesis by examining how small networks learn to implement group compositions. We present a novel algorithm by which neur...
https://proceedings.mlr.press/v202/clarkson23a.html
https://proceedings.mlr.press/v202/clarkson23a/clarkson23a.pdf
https://openreview.net/forum?id=1e80ooimrm
Distribution Free Prediction Sets for Node Classification
https://proceedings.mlr.press/v202/clarkson23a.html
Jase Clarkson
https://proceedings.mlr.press/v202/clarkson23a.html
ICML 2023
Graph Neural Networks (GNNs) are able to achieve high classification accuracy on many important real world datasets, but provide no rigorous notion of predictive uncertainty. Quantifying the confidence of GNN models is difficult due to the dependence between datapoints induced by the graph structure. We leverage recent...
https://proceedings.mlr.press/v202/cohen23a.html
https://proceedings.mlr.press/v202/cohen23a/cohen23a.pdf
https://openreview.net/forum?id=uBuWtVGF3h
Sequential Strategic Screening
https://proceedings.mlr.press/v202/cohen23a.html
Lee Cohen, Saeed Sharifi-Malvajerdi, Kevin Stangl, Ali Vakilian, Juba Ziani
https://proceedings.mlr.press/v202/cohen23a.html
ICML 2023
We initiate the study of strategic behavior in screening processes with multiple classifiers. We focus on two contrasting settings: a "conjunctive" setting in which an individual must satisfy all classifiers simultaneously, and a sequential setting in which an individual, to succeed, must satisfy classifiers one at a tim...
https://proceedings.mlr.press/v202/cohen23b.html
https://proceedings.mlr.press/v202/cohen23b/cohen23b.pdf
https://openreview.net/forum?id=0lufU7dRWA
Few-Sample Feature Selection via Feature Manifold Learning
https://proceedings.mlr.press/v202/cohen23b.html
David Cohen, Tal Shnitzer, Yuval Kluger, Ronen Talmon
https://proceedings.mlr.press/v202/cohen23b.html
ICML 2023
In this paper, we present a new method for few-sample supervised feature selection (FS). Our method first learns the manifold of the feature space of each class using kernels capturing multi-feature associations. Then, based on Riemannian geometry, a composite kernel is computed, extracting the differences between the ...
https://proceedings.mlr.press/v202/cole23a.html
https://proceedings.mlr.press/v202/cole23a/cole23a.pdf
https://openreview.net/forum?id=C6IDiP5Or9
Spatial Implicit Neural Representations for Global-Scale Species Mapping
https://proceedings.mlr.press/v202/cole23a.html
Elijah Cole, Grant Van Horn, Christian Lange, Alexander Shepard, Patrick Leary, Pietro Perona, Scott Loarie, Oisin Mac Aodha
https://proceedings.mlr.press/v202/cole23a.html
ICML 2023
Estimating the geographical range of a species from sparse observations is a challenging and important geospatial prediction problem. Given a set of locations where a species has been observed, the goal is to build a model to predict whether the species is present or absent at any location. This problem has a long hist...
https://proceedings.mlr.press/v202/coletta23a.html
https://proceedings.mlr.press/v202/coletta23a/coletta23a.pdf
https://openreview.net/forum?id=1s3P1SjAsF
K-SHAP: Policy Clustering Algorithm for Anonymous Multi-Agent State-Action Pairs
https://proceedings.mlr.press/v202/coletta23a.html
Andrea Coletta, Svitlana Vyetrenko, Tucker Balch
https://proceedings.mlr.press/v202/coletta23a.html
ICML 2023
Learning agent behaviors from observational data has been shown to improve our understanding of their decision-making processes, advancing our ability to explain their interactions with the environment and other agents. While multiple learning techniques have been proposed in the literature, there is one particular setting ...
https://proceedings.mlr.press/v202/comas23a.html
https://proceedings.mlr.press/v202/comas23a/comas23a.pdf
https://openreview.net/forum?id=Iwt7oI9cNb
Inferring Relational Potentials in Interacting Systems
https://proceedings.mlr.press/v202/comas23a.html
Armand Comas, Yilun Du, Christian Fernandez Lopez, Sandesh Ghimire, Mario Sznaier, Joshua B. Tenenbaum, Octavia Camps
https://proceedings.mlr.press/v202/comas23a.html
ICML 2023
Systems consisting of interacting agents are prevalent in the world, ranging from dynamical systems in physics to complex biological networks. To build systems which can interact robustly in the real world, it is thus important to be able to infer the precise interactions governing such systems. Existing approaches typ...
https://proceedings.mlr.press/v202/connolly23a.html
https://proceedings.mlr.press/v202/connolly23a/connolly23a.pdf
https://openreview.net/forum?id=jawDXfCldp
Task-specific experimental design for treatment effect estimation
https://proceedings.mlr.press/v202/connolly23a.html
Bethany Connolly, Kim Moore, Tobias Schwedes, Alexander Adam, Gary Willis, Ilya Feige, Christopher Frye
https://proceedings.mlr.press/v202/connolly23a.html
ICML 2023
Understanding causality should be a core requirement of any attempt to build real impact through AI. Due to the inherent unobservability of counterfactuals, large randomised trials (RCTs) are the standard for causal inference. But large experiments are generically expensive, and randomisation carries its own costs, e.g...
https://proceedings.mlr.press/v202/cornacchia23a.html
https://proceedings.mlr.press/v202/cornacchia23a/cornacchia23a.pdf
https://openreview.net/forum?id=EgRfH4jeTL
A Mathematical Model for Curriculum Learning for Parities
https://proceedings.mlr.press/v202/cornacchia23a.html
Elisabetta Cornacchia, Elchanan Mossel
https://proceedings.mlr.press/v202/cornacchia23a.html
ICML 2023
Curriculum learning (CL) - training using samples that are generated and presented in a meaningful order - was introduced in the machine learning context around a decade ago. While CL has been extensively used and analysed empirically, there has been very little mathematical justification for its advantages. We introduc...
https://proceedings.mlr.press/v202/covert23a.html
https://proceedings.mlr.press/v202/covert23a/covert23a.pdf
https://openreview.net/forum?id=dOaCuOsdmb
Learning to Maximize Mutual Information for Dynamic Feature Selection
https://proceedings.mlr.press/v202/covert23a.html
Ian Connick Covert, Wei Qiu, Mingyu Lu, Na Yoon Kim, Nathan J White, Su-In Lee
https://proceedings.mlr.press/v202/covert23a.html
ICML 2023
Feature selection helps reduce data acquisition costs in ML, but the standard approach is to train models with static feature subsets. Here, we consider the dynamic feature selection (DFS) problem where a model sequentially queries features based on the presently available information. DFS is often addressed with reinf...
https://proceedings.mlr.press/v202/cui23a.html
https://proceedings.mlr.press/v202/cui23a/cui23a.pdf
https://openreview.net/forum?id=3Loamzk5Fm
Rethinking Weak Supervision in Helping Contrastive Learning
https://proceedings.mlr.press/v202/cui23a.html
Jingyi Cui, Weiran Huang, Yifei Wang, Yisen Wang
https://proceedings.mlr.press/v202/cui23a.html
ICML 2023
Contrastive learning has shown outstanding performance in both supervised and unsupervised learning, and has recently been introduced to solve weakly supervised learning problems such as semi-supervised learning and noisy label learning. Despite the empirical evidence showing that semi-supervised labels improve the re...
https://proceedings.mlr.press/v202/cui23b.html
https://proceedings.mlr.press/v202/cui23b/cui23b.pdf
https://openreview.net/forum?id=CXkJh2ITml
Bayes-optimal Learning of Deep Random Networks of Extensive-width
https://proceedings.mlr.press/v202/cui23b.html
Hugo Cui, Florent Krzakala, Lenka Zdeborova
https://proceedings.mlr.press/v202/cui23b.html
ICML 2023
We consider the problem of learning a target function corresponding to a deep, extensive-width, non-linear neural network with random Gaussian weights. We consider the asymptotic limit where the number of samples, the input dimension and the network width are proportionally large and propose a closed-form expression fo...
https://proceedings.mlr.press/v202/cui23c.html
https://proceedings.mlr.press/v202/cui23c/cui23c.pdf
https://openreview.net/forum?id=2rNiCN94NY
A General Representation Learning Framework with Generalization Performance Guarantees
https://proceedings.mlr.press/v202/cui23c.html
Junbiao Cui, Jianqing Liang, Qin Yue, Jiye Liang
https://proceedings.mlr.press/v202/cui23c.html
ICML 2023
The generalization performance of machine learning methods depends heavily on the quality of data representation. However, existing research rarely considers representation learning from the perspective of generalization error. In this paper, we prove that the generalization error of a representation learning function can b...
https://proceedings.mlr.press/v202/cui23d.html
https://proceedings.mlr.press/v202/cui23d/cui23d.pdf
https://openreview.net/forum?id=MZkbgahv4a
IRNeXt: Rethinking Convolutional Network Design for Image Restoration
https://proceedings.mlr.press/v202/cui23d.html
Yuning Cui, Wenqi Ren, Sining Yang, Xiaochun Cao, Alois Knoll
https://proceedings.mlr.press/v202/cui23d.html
ICML 2023
We present IRNeXt, a simple yet effective convolutional network architecture for image restoration. Recently, Transformer models have dominated the field of image restoration due to their powerful ability to model long-range pixel interactions. In this paper, we excavate the potential of the convolutional neural netw...
https://proceedings.mlr.press/v202/cui23e.html
https://proceedings.mlr.press/v202/cui23e/cui23e.pdf
https://openreview.net/forum?id=ccwSdYv1GI
Scaling Up Dataset Distillation to ImageNet-1K with Constant Memory
https://proceedings.mlr.press/v202/cui23e.html
Justin Cui, Ruochen Wang, Si Si, Cho-Jui Hsieh
https://proceedings.mlr.press/v202/cui23e.html
ICML 2023
Dataset Distillation is a newly emerging area that aims to distill large datasets into much smaller and highly informative synthetic ones to accelerate training and reduce storage. Among various dataset distillation methods, trajectory-matching-based methods (MTT) have achieved SOTA performance in many tasks, e.g., on ...
https://proceedings.mlr.press/v202/cui23f.html
https://proceedings.mlr.press/v202/cui23f/cui23f.pdf
https://openreview.net/forum?id=SAgKrtDkn1
Learning Dynamic Query Combinations for Transformer-based Object Detection and Segmentation
https://proceedings.mlr.press/v202/cui23f.html
Yiming Cui, Linjie Yang, Haichao Yu
https://proceedings.mlr.press/v202/cui23f.html
ICML 2023
Transformer-based detection and segmentation methods use a list of learned detection queries to retrieve information from the transformer network and learn to predict the location and category of one specific object from each query. We empirically find that random convex combinations of the learned queries are still go...
https://proceedings.mlr.press/v202/curth23a.html
https://proceedings.mlr.press/v202/curth23a/curth23a.pdf
https://openreview.net/forum?id=BGv7lLQVWk
Adaptive Identification of Populations with Treatment Benefit in Clinical Trials: Machine Learning Challenges and Solutions
https://proceedings.mlr.press/v202/curth23a.html
Alicia Curth, Alihan Hüyük, Mihaela Van Der Schaar
https://proceedings.mlr.press/v202/curth23a.html
ICML 2023
We study the problem of adaptively identifying patient subpopulations that benefit from a given treatment during a confirmatory clinical trial. This type of adaptive clinical trial has been thoroughly studied in biostatistics, but has been allowed only limited adaptivity so far. Here, we aim to relax classical restrict...
https://proceedings.mlr.press/v202/curth23b.html
https://proceedings.mlr.press/v202/curth23b/curth23b.pdf
https://openreview.net/forum?id=fOSihVI1FW
In Search of Insights, Not Magic Bullets: Towards Demystification of the Model Selection Dilemma in Heterogeneous Treatment Effect Estimation
https://proceedings.mlr.press/v202/curth23b.html
Alicia Curth, Mihaela Van Der Schaar
https://proceedings.mlr.press/v202/curth23b.html
ICML 2023
Personalized treatment effect estimates are often of interest in high-stakes applications – thus, before deploying a model estimating such effects in practice, one needs to be sure that the best candidate from the ever-growing machine learning toolbox for this task was chosen. Unfortunately, due to the absence of count...
https://proceedings.mlr.press/v202/cutkosky23a.html
https://proceedings.mlr.press/v202/cutkosky23a/cutkosky23a.pdf
https://openreview.net/forum?id=GimajxXNc0
Optimal Stochastic Non-smooth Non-convex Optimization through Online-to-Non-convex Conversion
https://proceedings.mlr.press/v202/cutkosky23a.html
Ashok Cutkosky, Harsh Mehta, Francesco Orabona
https://proceedings.mlr.press/v202/cutkosky23a.html
ICML 2023
We present new algorithms for optimizing non-smooth, non-convex stochastic objectives based on a novel analysis technique. This improves the current best-known complexity for finding a $(\delta,\epsilon)$-stationary point from $O(\epsilon^{-4}\delta^{-1})$ stochastic gradient queries to $O(\epsilon^{-3}\delta^{-1})$, w...
https://proceedings.mlr.press/v202/cuturi23a.html
https://proceedings.mlr.press/v202/cuturi23a/cuturi23a.pdf
https://openreview.net/forum?id=KnvZKvOaJ7
Monge, Bregman and Occam: Interpretable Optimal Transport in High-Dimensions with Feature-Sparse Maps
https://proceedings.mlr.press/v202/cuturi23a.html
Marco Cuturi, Michal Klein, Pierre Ablin
https://proceedings.mlr.press/v202/cuturi23a.html
ICML 2023
Optimal transport (OT) theory focuses, among all maps $T:\mathbb{R}^d\rightarrow \mathbb{R}^d$ that can morph a probability measure $\mu$ onto another $\nu$, on those that are the “thriftiest”, i.e. such that the average cost $c(x, T(x))$ between $x$ and its image $T(x)$ is as small as possible. Many computational appr...
https://proceedings.mlr.press/v202/cyffers23a.html
https://proceedings.mlr.press/v202/cyffers23a/cyffers23a.pdf
https://openreview.net/forum?id=CBLDv6SFMn
From Noisy Fixed-Point Iterations to Private ADMM for Centralized and Federated Learning
https://proceedings.mlr.press/v202/cyffers23a.html
Edwige Cyffers, Aurélien Bellet, Debabrota Basu
https://proceedings.mlr.press/v202/cyffers23a.html
ICML 2023
We study differentially private (DP) machine learning algorithms as instances of noisy fixed-point iterations, in order to derive privacy and utility results from this well-studied framework. We show that this new perspective recovers popular private gradient-based methods like DP-SGD and provides a principled way to d...
https://proceedings.mlr.press/v202/dai23a.html
https://proceedings.mlr.press/v202/dai23a/dai23a.pdf
https://openreview.net/forum?id=HtHFnHrZXu
Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning
https://proceedings.mlr.press/v202/dai23a.html
Yanbo Dai, Songze Li
https://proceedings.mlr.press/v202/dai23a.html
ICML 2023
In a federated learning (FL) system, distributed clients upload their local models to a central server to aggregate into a global model. Malicious clients may plant backdoors into the global model through uploading poisoned local models, causing images with specific patterns to be misclassified into some target labels....
https://proceedings.mlr.press/v202/dai23b.html
https://proceedings.mlr.press/v202/dai23b/dai23b.pdf
https://openreview.net/forum?id=7WdMBofQFx
Refined Regret for Adversarial MDPs with Linear Function Approximation
https://proceedings.mlr.press/v202/dai23b.html
Yan Dai, Haipeng Luo, Chen-Yu Wei, Julian Zimmert
https://proceedings.mlr.press/v202/dai23b.html
ICML 2023
We consider learning in an adversarial Markov Decision Process (MDP) where the loss functions can change arbitrarily over $K$ episodes and the state space can be arbitrarily large. We assume that the Q-function of any policy is linear in some known features, that is, a linear function approximation exists. The best exi...
https://proceedings.mlr.press/v202/dai23c.html
https://proceedings.mlr.press/v202/dai23c/dai23c.pdf
https://openreview.net/forum?id=xdCQbljiLI
MultiRobustBench: Benchmarking Robustness Against Multiple Attacks
https://proceedings.mlr.press/v202/dai23c.html
Sihui Dai, Saeed Mahloujifar, Chong Xiang, Vikash Sehwag, Pin-Yu Chen, Prateek Mittal
https://proceedings.mlr.press/v202/dai23c.html
ICML 2023
The bulk of existing research in defending against adversarial examples focuses on defending against a single (typically bounded $\ell_p$-norm) attack, but for a practical setting, machine learning (ML) models should be robust to a wide variety of attacks. In this paper, we present the first unified framework for consi...
https://proceedings.mlr.press/v202/dai23d.html
https://proceedings.mlr.press/v202/dai23d/dai23d.pdf
https://openreview.net/forum?id=fX5I7lGLuG
Moderately Distributional Exploration for Domain Generalization
https://proceedings.mlr.press/v202/dai23d.html
Rui Dai, Yonggang Zhang, Zhen Fang, Bo Han, Xinmei Tian
https://proceedings.mlr.press/v202/dai23d.html
ICML 2023
Domain generalization (DG) aims to tackle the distribution shift between training domains and unknown target domains. Generating new domains is one of the most effective approaches, yet its performance gain depends on the distribution discrepancy between the generated and target domains. Distributionally robust optimiz...
https://proceedings.mlr.press/v202/daley23a.html
https://proceedings.mlr.press/v202/daley23a/daley23a.pdf
https://openreview.net/forum?id=8Lww9LXokZ
Trajectory-Aware Eligibility Traces for Off-Policy Reinforcement Learning
https://proceedings.mlr.press/v202/daley23a.html
Brett Daley, Martha White, Christopher Amato, Marlos C. Machado
https://proceedings.mlr.press/v202/daley23a.html
ICML 2023
Off-policy learning from multistep returns is crucial for sample-efficient reinforcement learning, but counteracting off-policy bias without exacerbating variance is challenging. Classically, off-policy bias is corrected in a per-decision manner: past temporal-difference errors are re-weighted by the instantaneous Impo...
https://proceedings.mlr.press/v202/daneshmand23a.html
https://proceedings.mlr.press/v202/daneshmand23a/daneshmand23a.pdf
https://openreview.net/forum?id=7snQRkYh6I
Efficient displacement convex optimization with particle gradient descent
https://proceedings.mlr.press/v202/daneshmand23a.html
Hadi Daneshmand, Jason D. Lee, Chi Jin
https://proceedings.mlr.press/v202/daneshmand23a.html
ICML 2023
Particle gradient descent, which uses particles to represent a probability measure and performs gradient descent on particles in parallel, is widely used to optimize functions of probability measures. This paper considers particle gradient descent with a finite number of particles and establishes its theoretical guaran...
https://proceedings.mlr.press/v202/dang23a.html
https://proceedings.mlr.press/v202/dang23a/dang23a.pdf
https://openreview.net/forum?id=OWROxDcS10
Multiple Thinking Achieving Meta-Ability Decoupling for Object Navigation
https://proceedings.mlr.press/v202/dang23a.html
Ronghao Dang, Lu Chen, Liuyi Wang, Zongtao He, Chengju Liu, Qijun Chen
https://proceedings.mlr.press/v202/dang23a.html
ICML 2023
We propose a meta-ability decoupling (MAD) paradigm, which brings together various object navigation methods in an architecture system, allowing them to mutually enhance each other and evolve together. Based on the MAD paradigm, we design a multiple thinking (MT) model that leverages distinct thinking to abstract vario...
https://proceedings.mlr.press/v202/dang23b.html
https://proceedings.mlr.press/v202/dang23b/dang23b.pdf
https://openreview.net/forum?id=jjpsFetXJp
Neural Collapse in Deep Linear Networks: From Balanced to Imbalanced Data
https://proceedings.mlr.press/v202/dang23b.html
Hien Dang, Tho Tran Huu, Stanley Osher, Hung The Tran, Nhat Ho, Tan Minh Nguyen
https://proceedings.mlr.press/v202/dang23b.html
ICML 2023
Modern deep neural networks have achieved impressive performance on tasks from image classification to natural language processing. Surprisingly, these complex systems with massive amounts of parameters exhibit the same structural properties in their last-layer features and classifiers across canonical datasets when tr...
https://proceedings.mlr.press/v202/dann23a.html
https://proceedings.mlr.press/v202/dann23a/dann23a.pdf
https://openreview.net/forum?id=skDVsmXjPR
Reinforcement Learning Can Be More Efficient with Multiple Rewards
https://proceedings.mlr.press/v202/dann23a.html
Christoph Dann, Yishay Mansour, Mehryar Mohri
https://proceedings.mlr.press/v202/dann23a.html
ICML 2023
Reward design is one of the most critical and challenging aspects when formulating a task as a reinforcement learning (RL) problem. In practice, it often takes several attempts of reward specification and learning with it in order to find one that leads to sample-efficient learning of the desired behavior. Instead, in ...
https://proceedings.mlr.press/v202/dann23b.html
https://proceedings.mlr.press/v202/dann23b/dann23b.pdf
https://openreview.net/forum?id=bUFUaawOTk
Best of Both Worlds Policy Optimization
https://proceedings.mlr.press/v202/dann23b.html
Christoph Dann, Chen-Yu Wei, Julian Zimmert
https://proceedings.mlr.press/v202/dann23b.html
ICML 2023
Policy optimization methods are popular reinforcement learning algorithms in practice, and recent works have built a theoretical foundation for them by proving $\sqrt{T}$ regret bounds even when the losses are adversarial. Such bounds are tight in the worst case but often overly pessimistic. In this work, we show that by ...
https://proceedings.mlr.press/v202/das23a.html
https://proceedings.mlr.press/v202/das23a/das23a.pdf
https://openreview.net/forum?id=dFflBEShcI
Image generation with shortest path diffusion
https://proceedings.mlr.press/v202/das23a.html
Ayan Das, Stathi Fotiadis, Anil Batra, Farhang Nabiei, Fengting Liao, Sattar Vakili, Da-Shan Shiu, Alberto Bernacchia
https://proceedings.mlr.press/v202/das23a.html
ICML 2023
The field of image generation has made significant progress thanks to the introduction of Diffusion Models, which learn to progressively reverse a given image corruption. Recently, a few studies introduced alternative ways of corrupting images in Diffusion Models, with an emphasis on blurring. However, these studies ar...
https://proceedings.mlr.press/v202/das23b.html
https://proceedings.mlr.press/v202/das23b/das23b.pdf
https://openreview.net/forum?id=rWGp9FbS0Q
Efficient List-Decodable Regression using Batches
https://proceedings.mlr.press/v202/das23b.html
Abhimanyu Das, Ayush Jain, Weihao Kong, Rajat Sen
https://proceedings.mlr.press/v202/das23b.html
ICML 2023
We demonstrate the use of batches in studying list-decodable linear regression, in which only $\alpha\in (0,1]$ fraction of batches contain genuine samples from a common distribution and the rest can contain arbitrary or even adversarial samples. When genuine batches have $\ge \tilde\Omega(1/\alpha)$ samples each, our ...
https://proceedings.mlr.press/v202/das23c.html
https://proceedings.mlr.press/v202/das23c/das23c.pdf
https://openreview.net/forum?id=3QIUvovsgJ
Beyond Uniform Lipschitz Condition in Differentially Private Optimization
https://proceedings.mlr.press/v202/das23c.html
Rudrajit Das, Satyen Kale, Zheng Xu, Tong Zhang, Sujay Sanghavi
https://proceedings.mlr.press/v202/das23c.html
ICML 2023
Most prior results on differentially private stochastic gradient descent (DP-SGD) are derived under the simplistic assumption of uniform Lipschitzness, i.e., the per-sample gradients are uniformly bounded. We generalize uniform Lipschitzness by assuming that the per-sample gradients have sample-dependent upper bounds, ...
https://proceedings.mlr.press/v202/das23d.html
https://proceedings.mlr.press/v202/das23d/das23d.pdf
https://openreview.net/forum?id=UL9purXHyB
Understanding Self-Distillation in the Presence of Label Noise
https://proceedings.mlr.press/v202/das23d.html
Rudrajit Das, Sujay Sanghavi
https://proceedings.mlr.press/v202/das23d.html
ICML 2023
Self-distillation (SD) is the process of first training a "teacher" model and then using its predictions to train a "student" model that has the same architecture. Specifically, the student’s loss is $\big(\xi*\ell(\text{teacher’s predictions}, \text{ student’s predictions}) + (1-\xi)*\ell(\text{given labels}, \text{ s...
https://proceedings.mlr.press/v202/datta23a.html
https://proceedings.mlr.press/v202/datta23a/datta23a.pdf
https://openreview.net/forum?id=oqkckmjCYp
Interval Bound Interpolation for Few-shot Learning with Few Tasks
https://proceedings.mlr.press/v202/datta23a.html
Shounak Datta, Sankha Subhra Mullick, Anish Chakrabarty, Swagatam Das
https://proceedings.mlr.press/v202/datta23a.html
ICML 2023
Few-shot learning aims to transfer the knowledge acquired from training on a diverse set of tasks to unseen tasks from the same task distribution, with a limited amount of labeled data. The underlying requirement for effective few-shot generalization is to learn a good representation of the task manifold. This becomes ...
https://proceedings.mlr.press/v202/daulton23a.html
https://proceedings.mlr.press/v202/daulton23a/daulton23a.pdf
https://openreview.net/forum?id=aX9jtC2lfS
Hypervolume Knowledge Gradient: A Lookahead Approach for Multi-Objective Bayesian Optimization with Partial Information
https://proceedings.mlr.press/v202/daulton23a.html
Sam Daulton, Maximilian Balandat, Eytan Bakshy
https://proceedings.mlr.press/v202/daulton23a.html
ICML 2023
Bayesian optimization is a popular method for sample efficient multi-objective optimization. However, existing Bayesian optimization techniques fail to effectively exploit common and often-neglected problem structure such as decoupled evaluations, where objectives can be queried independently from one another and each ...
https://proceedings.mlr.press/v202/davies23a.html
https://proceedings.mlr.press/v202/davies23a/davies23a.pdf
https://openreview.net/forum?id=OUjObDqOM2
Fast Combinatorial Algorithms for Min Max Correlation Clustering
https://proceedings.mlr.press/v202/davies23a.html
Sami Davies, Benjamin Moseley, Heather Newman
https://proceedings.mlr.press/v202/davies23a.html
ICML 2023
We introduce fast algorithms for correlation clustering with respect to the Min Max objective that provide constant factor approximations on complete graphs. Our algorithms are the first purely combinatorial approximation algorithms for this problem. We construct a novel semi-metric on the set of vertices, which we cal...
https://proceedings.mlr.press/v202/davies23b.html
https://proceedings.mlr.press/v202/davies23b/davies23b.pdf
https://openreview.net/forum?id=UTtYSDO1MK
Predictive Flows for Faster Ford-Fulkerson
https://proceedings.mlr.press/v202/davies23b.html
Sami Davies, Benjamin Moseley, Sergei Vassilvitskii, Yuyan Wang
https://proceedings.mlr.press/v202/davies23b.html
ICML 2023
Recent work has shown that leveraging learned predictions can improve the running time of algorithms for bipartite matching and similar combinatorial problems. In this work, we build on this idea to improve the performance of the widely used Ford-Fulkerson algorithm for computing maximum flows by seeding Ford-Fulkerson...
https://proceedings.mlr.press/v202/davies23c.html
https://proceedings.mlr.press/v202/davies23c/davies23c.pdf
https://openreview.net/forum?id=UeCasRZMj5
The Persistent Laplacian for Data Science: Evaluating Higher-Order Persistent Spectral Representations of Data
https://proceedings.mlr.press/v202/davies23c.html
Thomas Davies, Zhengchao Wan, Ruben J Sanchez-Garcia
https://proceedings.mlr.press/v202/davies23c.html
ICML 2023
Persistent homology is arguably the most successful technique in Topological Data Analysis. It combines homology, a topological feature of a data set, with persistence, which tracks the evolution of homology over different scales. The persistent Laplacian is a recent theoretical development that combines persistence wi...
https://proceedings.mlr.press/v202/daw23a.html
https://proceedings.mlr.press/v202/daw23a/daw23a.pdf
https://openreview.net/forum?id=rhvb4kprWB
Mitigating Propagation Failures in Physics-informed Neural Networks using Retain-Resample-Release (R3) Sampling
https://proceedings.mlr.press/v202/daw23a.html
Arka Daw, Jie Bu, Sifan Wang, Paris Perdikaris, Anuj Karpatne
https://proceedings.mlr.press/v202/daw23a.html
ICML 2023
Despite the success of physics-informed neural networks (PINNs) in approximating partial differential equations (PDEs), PINNs can sometimes fail to converge to the correct solution in problems involving complicated PDEs. This is reflected in several recent studies on characterizing the "failure modes" of PINNs, althoug...
https://proceedings.mlr.press/v202/dbouk23a.html
https://proceedings.mlr.press/v202/dbouk23a/dbouk23a.pdf
https://openreview.net/forum?id=2aytHX3LRf
On the Robustness of Randomized Ensembles to Adversarial Perturbations
https://proceedings.mlr.press/v202/dbouk23a.html
Hassan Dbouk, Naresh Shanbhag
https://proceedings.mlr.press/v202/dbouk23a.html
ICML 2023
Randomized ensemble classifiers (RECs), where one classifier is randomly selected during inference, have emerged as an attractive alternative to traditional ensembling methods for realizing adversarially robust classifiers with limited compute requirements. However, recent works have shown that existing methods for con...
https://proceedings.mlr.press/v202/de-jong23a.html
https://proceedings.mlr.press/v202/de-jong23a/de-jong23a.pdf
https://openreview.net/forum?id=nlUAvrMbUZ
Pre-computed memory or on-the-fly encoding? A hybrid approach to retrieval augmentation makes the most of your compute
https://proceedings.mlr.press/v202/de-jong23a.html
Michiel De Jong, Yury Zemlyanskiy, Nicholas Fitzgerald, Joshua Ainslie, Sumit Sanghai, Fei Sha, William W. Cohen
https://proceedings.mlr.press/v202/de-jong23a.html
ICML 2023
Retrieval-augmented language models such as Fusion-in-Decoder are powerful, setting the state of the art on a variety of knowledge-intensive tasks. However, they are also expensive, due to the need to encode a large number of retrieved passages. Some work avoids this cost by pre-encoding a text corpus into a memory and...
https://proceedings.mlr.press/v202/de-oliveira-fonseca23a.html
https://proceedings.mlr.press/v202/de-oliveira-fonseca23a/de-oliveira-fonseca23a.pdf
https://openreview.net/forum?id=RnZhB7kNl0
Continuous Spatiotemporal Transformer
https://proceedings.mlr.press/v202/de-oliveira-fonseca23a.html
Antonio Henrique De Oliveira Fonseca, Emanuele Zappala, Josue Ortega Caro, David Van Dijk
https://proceedings.mlr.press/v202/de-oliveira-fonseca23a.html
ICML 2023
Modeling spatiotemporal dynamical systems is a fundamental challenge in machine learning. Transformer models have been very successful in NLP and computer vision where they provide interpretable representations of data. However, a limitation of transformers in modeling continuous dynamical systems is that they are fund...
https://proceedings.mlr.press/v202/de-silva23a.html
https://proceedings.mlr.press/v202/de-silva23a/de-silva23a.pdf
https://openreview.net/forum?id=8D3SsQlRbY
The Value of Out-of-Distribution Data
https://proceedings.mlr.press/v202/de-silva23a.html
Ashwin De Silva, Rahul Ramesh, Carey Priebe, Pratik Chaudhari, Joshua T Vogelstein
https://proceedings.mlr.press/v202/de-silva23a.html
ICML 2023
Generalization error always improves with more in-distribution data. However, it is an open question what happens as we add out-of-distribution (OOD) data. Intuitively, if the OOD data is quite different, it seems more data would harm generalization error, though if the OOD data are sufficiently similar, much empirical...
https://proceedings.mlr.press/v202/de-sousa-ribeiro23a.html
https://proceedings.mlr.press/v202/de-sousa-ribeiro23a/de-sousa-ribeiro23a.pdf
https://openreview.net/forum?id=DA0PROpwan
High Fidelity Image Counterfactuals with Probabilistic Causal Models
https://proceedings.mlr.press/v202/de-sousa-ribeiro23a.html
Fabio De Sousa Ribeiro, Tian Xia, Miguel Monteiro, Nick Pawlowski, Ben Glocker
https://proceedings.mlr.press/v202/de-sousa-ribeiro23a.html
ICML 2023
We present a general causal generative modelling framework for accurate estimation of high fidelity image counterfactuals with deep structural causal models. Estimation of interventional and counterfactual queries for high-dimensional structured variables, such as images, remains a challenging task. We leverage ideas f...
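Counterfactual estimation in structural causal models follows Pearl's abduction-action-prediction recipe, which the paper scales to images. A minimal sketch on a toy linear SCM (not the paper's deep model; the SCM equations here are invented for illustration):

```python
def counterfactual_y(x_obs, y_obs, x_cf):
    """Pearl's three-step counterfactual on a toy linear SCM.

    SCM: X := U_x,  Y := 2*X + U_y.
    Abduction: infer the exogenous noise U_y from the observation;
    Action: set X to the counterfactual value x_cf (an intervention);
    Prediction: push the recovered noise through the modified model.
    """
    u_y = y_obs - 2.0 * x_obs      # abduction
    return 2.0 * x_cf + u_y        # action + prediction

# Observed (x=1, y=2.5) implies u_y = 0.5; under do(X=3), y would be 6.5.
assert counterfactual_y(1.0, 2.5, 3.0) == 6.5
```

For images, the deep SCM replaces the closed-form abduction step with amortized inference over high-dimensional noise, but the three-step logic is the same.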
https://proceedings.mlr.press/v202/dedieu23a.html
https://proceedings.mlr.press/v202/dedieu23a/dedieu23a.pdf
https://openreview.net/forum?id=VTkBZayJos
Learning Noisy OR Bayesian Networks with Max-Product Belief Propagation
https://proceedings.mlr.press/v202/dedieu23a.html
Antoine Dedieu, Guangyao Zhou, Dileep George, Miguel Lazaro-Gredilla
https://proceedings.mlr.press/v202/dedieu23a.html
ICML 2023
Noisy-OR Bayesian Networks (BNs) are a family of probabilistic graphical models which express rich statistical dependencies in binary data. Variational inference (VI) has been the main method proposed to learn noisy-OR BNs with complex latent structures (Jaakkola & Jordan, 1999; Ji et al., 2020; Buhai et al., 2020). Ho...
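The noisy-OR conditional probability at the heart of these networks is standard and easy to state: each active parent independently fails to trigger the child with probability `1 - p[i]`, plus a leak term. A sketch of the CPD only (the paper's max-product belief propagation learner is not shown):

```python
import numpy as np

def noisy_or_prob(x, p, leak=0.01):
    """P(child = 1 | parent states x) under a noisy-OR CPD.

    Each active parent i independently fails to trigger the child with
    probability 1 - p[i]; the leak lets the child fire spontaneously.
    """
    x = np.asarray(x, dtype=float)
    p = np.asarray(p, dtype=float)
    return 1.0 - (1.0 - leak) * np.prod((1.0 - p) ** x)

# No parents active: only the leak can fire the child.
assert np.isclose(noisy_or_prob([0, 0], [0.9, 0.8]), 0.01)
# Both active: 1 - 0.99 * 0.1 * 0.2 = 0.9802
assert np.isclose(noisy_or_prob([1, 1], [0.9, 0.8]), 0.9802)
```

This factorized form is what makes noisy-OR layers tractable: the CPD needs one parameter per parent rather than a full exponential table.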
https://proceedings.mlr.press/v202/defazio23a.html
https://proceedings.mlr.press/v202/defazio23a/defazio23a.pdf
https://openreview.net/forum?id=GXZ6cT5cvY
Learning-Rate-Free Learning by D-Adaptation
https://proceedings.mlr.press/v202/defazio23a.html
Aaron Defazio, Konstantin Mishchenko
https://proceedings.mlr.press/v202/defazio23a.html
ICML 2023
The speed of gradient descent for convex Lipschitz functions is highly dependent on the choice of learning rate. Setting the learning rate to achieve the optimal convergence rate requires knowing the distance D from the initial point to the solution set. In this work, we describe a single-loop method, with no back-trac...
https://proceedings.mlr.press/v202/dehghani23a.html
https://proceedings.mlr.press/v202/dehghani23a/dehghani23a.pdf
https://openreview.net/forum?id=Lhyy8H75KA
Scaling Vision Transformers to 22 Billion Parameters
https://proceedings.mlr.press/v202/dehghani23a.html
Mostafa Dehghani, Josip Djolonga, Basil Mustafa, Piotr Padlewski, Jonathan Heek, Justin Gilmer, Andreas Peter Steiner, Mathilde Caron, Robert Geirhos, Ibrahim Alabdulmohsin, Rodolphe Jenatton, Lucas Beyer, Michael Tschannen, Anurag Arnab, Xiao Wang, Carlos Riquelme Ruiz, Matthias Minderer, Joan Puigcerver, Utku Evci, M...
https://proceedings.mlr.press/v202/dehghani23a.html
ICML 2023
The scaling of Transformers has driven breakthrough capabilities for language models. At present, the largest large language models (LLMs) contain upwards of 100B parameters. Vision Transformers (ViT) have introduced the same architecture to image and video modelling, but these have not yet been successfully scaled to ...
https://proceedings.mlr.press/v202/delattre23a.html
https://proceedings.mlr.press/v202/delattre23a/delattre23a.pdf
https://openreview.net/forum?id=Z0ATKIJR8G
Efficient Bound of Lipschitz Constant for Convolutional Layers by Gram Iteration
https://proceedings.mlr.press/v202/delattre23a.html
Blaise Delattre, Quentin Barthélemy, Alexandre Araujo, Alexandre Allauzen
https://proceedings.mlr.press/v202/delattre23a.html
ICML 2023
Since the control of the Lipschitz constant has a great impact on the training stability, generalization, and robustness of neural networks, the estimation of this value is nowadays a real scientific challenge. In this paper we introduce a precise, fast, and differentiable upper bound for the spectral norm of convoluti...
https://proceedings.mlr.press/v202/demirovic23a.html
https://proceedings.mlr.press/v202/demirovic23a/demirovic23a.pdf
https://openreview.net/forum?id=0rO3nlTlbG
Blossom: an Anytime Algorithm for Computing Optimal Decision Trees
https://proceedings.mlr.press/v202/demirovic23a.html
Emir Demirović, Emmanuel Hebrard, Louis Jean
https://proceedings.mlr.press/v202/demirovic23a.html
ICML 2023
We propose a simple algorithm to learn optimal decision trees of bounded depth. This algorithm is essentially an anytime version of the state-of-the-art dynamic programming approach. It has virtually no overhead compared to heuristic methods and is comparable to the best exact methods to prove optimality on most data s...
https://proceedings.mlr.press/v202/deng23a.html
https://proceedings.mlr.press/v202/deng23a/deng23a.pdf
https://openreview.net/forum?id=BTwEqF0s34
Optimizing NOTEARS Objectives via Topological Swaps
https://proceedings.mlr.press/v202/deng23a.html
Chang Deng, Kevin Bello, Bryon Aragam, Pradeep Kumar Ravikumar
https://proceedings.mlr.press/v202/deng23a.html
ICML 2023
Recently, an intriguing class of non-convex optimization problems has emerged in the context of learning directed acyclic graphs (DAGs). These problems involve minimizing a given loss or score function, subject to a non-convex continuous constraint that penalizes the presence of cycles in a graph. In this work, we delv...
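The continuous acyclicity constraint being referenced is the NOTEARS function `h(W) = tr(exp(W ∘ W)) − d`, which is zero exactly when the weighted adjacency matrix `W` encodes a DAG. A sketch of the constraint itself (the paper's topological-swap optimizer is not shown):

```python
import numpy as np
from scipy.linalg import expm

def notears_h(W):
    """NOTEARS acyclicity function h(W) = tr(exp(W * W)) - d.

    Here W * W is the elementwise (Hadamard) square. h(W) = 0 iff W
    encodes a DAG; h(W) > 0 measures how strongly cycles are present.
    """
    d = W.shape[0]
    return float(np.trace(expm(W * W)) - d)

# A DAG (edges 0->1->2) has h = 0; the back-edge 2->0 creates a cycle.
dag = np.array([[0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0],
                [0.0, 0.0, 0.0]])
cyclic = dag.copy()
cyclic[2, 0] = 1.0
assert abs(notears_h(dag)) < 1e-8
assert notears_h(cyclic) > 0.1
```

The trace-of-exponential form works because powers of the Hadamard-squared adjacency count weighted closed walks, so any cycle contributes positive mass to the diagonal.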
https://proceedings.mlr.press/v202/deng23b.html
https://proceedings.mlr.press/v202/deng23b/deng23b.pdf
https://openreview.net/forum?id=kbbpaKhXmN
Uncertainty Estimation by Fisher Information-based Evidential Deep Learning
https://proceedings.mlr.press/v202/deng23b.html
Danruo Deng, Guangyong Chen, Yang Yu, Furui Liu, Pheng-Ann Heng
https://proceedings.mlr.press/v202/deng23b.html
ICML 2023
Uncertainty estimation is a key factor that makes deep learning reliable in practical applications. Recently proposed evidential neural networks explicitly account for different uncertainties by treating the network’s outputs as evidence to parameterize the Dirichlet distribution, and achieve impressive performance in ...
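The evidential parameterization mentioned here follows the standard subjective-logic setup: non-negative evidence from the network defines a Dirichlet, whose total mass splits into per-class belief and a vacuity (uncertainty) term. A sketch of that base formulation only, without the paper's Fisher-information weighting:

```python
import numpy as np

def dirichlet_uncertainty(logits):
    """Subjective-logic uncertainty from evidential network outputs.

    Non-negative evidence e parameterizes a Dirichlet with alpha = e + 1;
    belief mass b_k = e_k / S and vacuity u = K / S, where S = sum(alpha).
    """
    evidence = np.maximum(logits, 0.0)    # e.g. ReLU evidence
    alpha = evidence + 1.0
    S = alpha.sum()
    belief = evidence / S
    u = len(alpha) / S
    return belief, u

belief, u = dirichlet_uncertainty(np.array([9.0, 0.0, 0.0]))
# Belief masses and vacuity partition the unit mass: sum(b) + u == 1.
assert np.isclose(belief.sum() + u, 1.0)
# Strong evidence for class 0 -> low vacuity (here u = 3/12 = 0.25).
assert np.isclose(u, 0.25)
```

With zero evidence everywhere the Dirichlet is uniform and `u = 1`, which is how these models flag out-of-distribution inputs.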