Columns (each record below lists these fields in order): abs · Download PDF · OpenReview · title · url · authors · detail_url · tags · abstract
https://proceedings.mlr.press/v235/balazevic24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/balazevic24a/balazevic24a.pdf
https://openreview.net/forum?id=qeFgvVVAJ2
Memory Consolidation Enables Long-Context Video Understanding
https://proceedings.mlr.press/v235/balazevic24a.html
Ivana Balazevic, Yuge Shi, Pinelopi Papalampidi, Rahma Chaabouni, Skanda Koppula, Olivier J Henaff
https://proceedings.mlr.press/v235/balazevic24a.html
ICML 2024
Most transformer-based video encoders are limited to short temporal contexts due to their quadratic complexity. While various attempts have been made to extend this context, this has often come at the cost of both conceptual and computational complexity. We propose to instead re-purpose existing pre-trained video trans...
https://proceedings.mlr.press/v235/balestriero24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/balestriero24a/balestriero24a.pdf
https://openreview.net/forum?id=glfcwSsks8
Characterizing Large Language Model Geometry Helps Solve Toxicity Detection and Generation
https://proceedings.mlr.press/v235/balestriero24a.html
Randall Balestriero, Romain Cosentino, Sarath Shekkizhar
https://proceedings.mlr.press/v235/balestriero24a.html
ICML 2024
Large Language Models (LLMs) drive current AI breakthroughs despite very little being known about their internal representations. In this work, we propose to shed light on LLMs' inner mechanisms through the lens of geometry. In particular, we develop in closed form $(i)$ the intrinsic dimension in which the Multi-He...
https://proceedings.mlr.press/v235/balestriero24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/balestriero24b/balestriero24b.pdf
https://openreview.net/forum?id=XsDWw1Mn2p
How Learning by Reconstruction Produces Uninformative Features For Perception
https://proceedings.mlr.press/v235/balestriero24b.html
Randall Balestriero, Yann Lecun
https://proceedings.mlr.press/v235/balestriero24b.html
ICML 2024
Input space reconstruction is an attractive representation learning paradigm. Despite the interpretability benefits of reconstruction and generation, we identify a misalignment between learning to reconstruct and learning for perception. We show that the former allocates a model’s capacity towards a subspace of the data ex...
https://proceedings.mlr.press/v235/balmaseda24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/balmaseda24a/balmaseda24a.pdf
https://openreview.net/forum?id=FpbKoIPHxb
Combinatorial Approximations for Cluster Deletion: Simpler, Faster, and Better
https://proceedings.mlr.press/v235/balmaseda24a.html
Vicente Balmaseda, Ying Xu, Yixin Cao, Nate Veldt
https://proceedings.mlr.press/v235/balmaseda24a.html
ICML 2024
Cluster deletion is an NP-hard graph clustering objective with applications in computational biology and social network analysis, where the goal is to delete a minimum number of edges to partition a graph into cliques. We first provide a tighter analysis of two previous approximation algorithms, improving their approxi...
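The cluster-deletion objective described above can be checked mechanically. A minimal sketch (the helper name `cluster_deletion_cost` is hypothetical, not from the paper): cross-cluster edges count as deletions, and any cluster that is not a clique makes the partition infeasible.

```python
def cluster_deletion_cost(edges, clusters):
    """Score a candidate cluster-deletion solution.

    Edges crossing clusters are the ones deleted; inside each cluster
    every pair must be connected (a clique).  Returns the number of
    deleted edges, or None if some cluster is not a clique.
    """
    label = {v: i for i, cl in enumerate(clusters) for v in cl}
    edge_set = {frozenset(e) for e in edges}
    deleted = sum(1 for u, v in edges if label[u] != label[v])
    for cl in clusters:
        members = list(cl)
        for i in range(len(members)):
            for j in range(i + 1, len(members)):
                if frozenset((members[i], members[j])) not in edge_set:
                    return None  # missing intra-cluster edge: not a clique
    return deleted
```

For a triangle {1,2,3} with a pendant vertex 4, cutting off the pendant deletes exactly one edge, while merging everything into one cluster is infeasible.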
https://proceedings.mlr.press/v235/balseiro24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/balseiro24a/balseiro24a.pdf
https://openreview.net/forum?id=HTMFUKAm8B
A Field Guide for Pacing Budget and ROS Constraints
https://proceedings.mlr.press/v235/balseiro24a.html
Santiago R. Balseiro, Kshipra Bhawalkar, Zhe Feng, Haihao Lu, Vahab Mirrokni, Balasubramanian Sivan, Di Wang
https://proceedings.mlr.press/v235/balseiro24a.html
ICML 2024
Budget pacing is a popular service that has been offered by major internet advertising platforms since their inception. In the past few years, autobidding products that provide real-time bidding as a service to advertisers have seen a prominent rise in adoption. A popular autobidding strategy is value maximization subje...
https://proceedings.mlr.press/v235/balsells-rodas24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/balsells-rodas24a/balsells-rodas24a.pdf
https://openreview.net/forum?id=Eew3yUQQtE
On the Identifiability of Switching Dynamical Systems
https://proceedings.mlr.press/v235/balsells-rodas24a.html
Carles Balsells-Rodas, Yixin Wang, Yingzhen Li
https://proceedings.mlr.press/v235/balsells-rodas24a.html
ICML 2024
The identifiability of latent variable models has received increasing attention due to its relevance in interpretability and out-of-distribution generalisation. In this work, we study the identifiability of Switching Dynamical Systems, taking an initial step toward extending identifiability analysis to sequential laten...
https://proceedings.mlr.press/v235/bamas24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bamas24a/bamas24a.pdf
https://openreview.net/forum?id=b9uHveqszc
Analyzing $D^\alpha$ seeding for $k$-means
https://proceedings.mlr.press/v235/bamas24a.html
Etienne Bamas, Sai Ganesh Nagarajan, Ola Svensson
https://proceedings.mlr.press/v235/bamas24a.html
ICML 2024
One of the most popular clustering algorithms is the celebrated $D^\alpha$ seeding algorithm (also known as $k$-means++ when $\alpha=2$) by Arthur and Vassilvitskii (2007), who showed that it guarantees in expectation an $O(2^{2\alpha}\cdot \log k)$-approximate solution to the ($k$,$\alpha$)-clustering cost (where dista...
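The $D^\alpha$ seeding rule is short enough to sketch directly (function name hypothetical; not the authors' code): the first center is sampled uniformly, and each subsequent center with probability proportional to the $\alpha$-th power of its distance to the nearest chosen center, so $\alpha=2$ recovers $k$-means++.

```python
import random

def d_alpha_seeding(points, k, alpha=2.0, rng=None):
    """D^alpha seeding: first center uniform at random, then each new
    center sampled with probability proportional to
    dist(x, nearest chosen center) ** alpha."""
    rng = rng or random.Random(0)
    centers = [rng.choice(points)]
    while len(centers) < k:
        # distance from each point to its nearest already-chosen center
        dists = [min(sum((xi - ci) ** 2 for xi, ci in zip(x, c)) ** 0.5
                     for c in centers)
                 for x in points]
        weights = [d ** alpha for d in dists]
        centers.append(rng.choices(points, weights=weights, k=1)[0])
    return centers
```

Larger $\alpha$ biases the sampling more aggressively toward far-away points, which is the trade-off the paper analyzes.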
https://proceedings.mlr.press/v235/bampis24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bampis24a/bampis24a.pdf
https://openreview.net/forum?id=AD5QC1BTJL
Parsimonious Learning-Augmented Approximations for Dense Instances of $\mathcal{NP}$-hard Problems
https://proceedings.mlr.press/v235/bampis24a.html
Evripidis Bampis, Bruno Escoffier, Michalis Xefteris
https://proceedings.mlr.press/v235/bampis24a.html
ICML 2024
The classical work of (Arora et al., 1999) provides a scheme that gives, for any $\epsilon>0$, a polynomial time $1-\epsilon$ approximation algorithm for dense instances of a family of $\mathcal{NP}$-hard problems, such as Max-CUT and Max-$k$-SAT. In this paper we extend and speed up this scheme using a logarithmic num...
https://proceedings.mlr.press/v235/ban24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ban24a/ban24a.pdf
https://openreview.net/forum?id=KLmWRMg6nL
Fair Resource Allocation in Multi-Task Learning
https://proceedings.mlr.press/v235/ban24a.html
Hao Ban, Kaiyi Ji
https://proceedings.mlr.press/v235/ban24a.html
ICML 2024
By jointly learning multiple tasks, multi-task learning (MTL) can leverage the shared knowledge across tasks, resulting in improved data efficiency and generalization performance. However, a major challenge in MTL lies in the presence of conflicting gradients, which can hinder the fair optimization of some tasks and su...
https://proceedings.mlr.press/v235/band24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/band24a/band24a.pdf
https://openreview.net/forum?id=rJVjQSQ8ye
Linguistic Calibration of Long-Form Generations
https://proceedings.mlr.press/v235/band24a.html
Neil Band, Xuechen Li, Tengyu Ma, Tatsunori Hashimoto
https://proceedings.mlr.press/v235/band24a.html
ICML 2024
Language models (LMs) may lead their users to make suboptimal downstream decisions when they confidently hallucinate. This issue can be mitigated by having the LM verbally convey the probability that its claims are correct, but existing models cannot produce long-form text with calibrated confidence statements. Through...
https://proceedings.mlr.press/v235/banerjee24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/banerjee24a/banerjee24a.pdf
https://openreview.net/forum?id=HOG80Yk4Gw
Relational DNN Verification With Cross Executional Bound Refinement
https://proceedings.mlr.press/v235/banerjee24a.html
Debangshu Banerjee, Gagandeep Singh
https://proceedings.mlr.press/v235/banerjee24a.html
ICML 2024
We focus on verifying relational properties defined over deep neural networks (DNNs) such as robustness against universal adversarial perturbations (UAP), certified worst-case Hamming distance for binary string classifications, etc. Precise verification of these properties requires reasoning about multiple executions o...
https://proceedings.mlr.press/v235/banihashem24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/banihashem24a/banihashem24a.pdf
https://openreview.net/forum?id=uUeXaKLE1I
A Dynamic Algorithm for Weighted Submodular Cover Problem
https://proceedings.mlr.press/v235/banihashem24a.html
Kiarash Banihashem, Samira Goudarzi, Mohammadtaghi Hajiaghayi, Peyman Jabbarzade, Morteza Monemizadeh
https://proceedings.mlr.press/v235/banihashem24a.html
ICML 2024
We initiate the study of the submodular cover problem in a dynamic setting where the elements of the ground set are inserted and deleted. In the classical submodular cover problem, we are given a monotone submodular function $f : 2^{V} \to \mathbb{R}^{\ge 0}$ and the goal is to obtain a set $S \subseteq V$ that minimiz...
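For intuition on the static problem being made dynamic above, here is a sketch of the classical greedy for weighted submodular cover — not the paper's dynamic algorithm — which repeatedly adds the element with the best marginal gain per unit weight until $f(S) = f(V)$ (all names hypothetical).

```python
def greedy_submodular_cover(V, f, w):
    """Wolsey-style greedy for weighted submodular cover: grow S until
    f(S) reaches f(V), each step adding the element with the largest
    marginal gain per unit weight."""
    S = set()
    target = f(set(V))
    while f(S) < target:
        best = max((e for e in V if e not in S),
                   key=lambda e: (f(S | {e}) - f(S)) / w[e])
        if f(S | {best}) == f(S):
            break  # no remaining element adds value
        S.add(best)
    return S
```

With a coverage function (f = size of the union of the chosen sets) this reduces to greedy weighted set cover.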
https://proceedings.mlr.press/v235/banihashem24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/banihashem24b/banihashem24b.pdf
https://openreview.net/forum?id=z3PUNzdmGs
Dynamic Metric Embedding into $\ell_p$ Space
https://proceedings.mlr.press/v235/banihashem24b.html
Kiarash Banihashem, Mohammadtaghi Hajiaghayi, Dariusz Rafal Kowalski, Jan Olkowski, Max Springer
https://proceedings.mlr.press/v235/banihashem24b.html
ICML 2024
We give the first non-trivial decremental dynamic embedding of a weighted, undirected graph $G$ into $\ell_p$ space. Given a weighted graph $G$ undergoing a sequence of edge weight increases, the goal of this problem is to maintain a (randomized) mapping $\phi: (G,d) \to (X,\ell_p)$ from the set of vertices of the grap...
https://proceedings.mlr.press/v235/baninajjar24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/baninajjar24a/baninajjar24a.pdf
https://openreview.net/forum?id=gUFufRkzjV
VNN: Verification-Friendly Neural Networks with Hard Robustness Guarantees
https://proceedings.mlr.press/v235/baninajjar24a.html
Anahita Baninajjar, Ahmed Rezine, Amir Aminifar
https://proceedings.mlr.press/v235/baninajjar24a.html
ICML 2024
Machine learning techniques often lack formal correctness guarantees, evidenced by the widespread adversarial examples that plague most deep-learning applications. This lack of formal guarantees resulted in several research efforts that aim at verifying Deep Neural Networks (DNNs), with a particular focus on safety-cri...
https://proceedings.mlr.press/v235/bao24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bao24a/bao24a.pdf
https://openreview.net/forum?id=yHRxnhKyEJ
Provable Benefits of Local Steps in Heterogeneous Federated Learning for Neural Networks: A Feature Learning Perspective
https://proceedings.mlr.press/v235/bao24a.html
Yajie Bao, Michael Crawshaw, Mingrui Liu
https://proceedings.mlr.press/v235/bao24a.html
ICML 2024
Local steps are crucial for Federated Learning (FL) algorithms and have witnessed great empirical success in reducing communication costs and improving the generalization performance of deep neural networks. However, there are limited studies on the effect of local steps on heterogeneous FL. A few works investigate thi...
https://proceedings.mlr.press/v235/bao24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bao24b/bao24b.pdf
https://openreview.net/forum?id=aRZjRj41WQ
Self-attention Networks Localize When QK-eigenspectrum Concentrates
https://proceedings.mlr.press/v235/bao24b.html
Han Bao, Ryuichiro Hataya, Ryo Karakida
https://proceedings.mlr.press/v235/bao24b.html
ICML 2024
The self-attention mechanism prevails in modern machine learning. It has an interesting functionality of adaptively selecting tokens from an input sequence by modulating the degree of attention localization, which many researchers speculate is the basis of the powerful model performance but complicates the underlying m...
https://proceedings.mlr.press/v235/bao24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bao24c/bao24c.pdf
https://openreview.net/forum?id=pmcusTywXO
Graph Out-of-Distribution Detection Goes Neighborhood Shaping
https://proceedings.mlr.press/v235/bao24c.html
Tianyi Bao, Qitian Wu, Zetian Jiang, Yiting Chen, Jiawei Sun, Junchi Yan
https://proceedings.mlr.press/v235/bao24c.html
ICML 2024
Despite the rich line of research works on out-of-distribution (OOD) detection on images, the literature on OOD detection for interdependent data, e.g., graphs, is still relatively limited. To fill this gap, we introduce TopoOOD as a principled approach that accommodates graph topology and neighborhood context for dete...
https://proceedings.mlr.press/v235/bar24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bar24a/bar24a.pdf
https://openreview.net/forum?id=hr8OXXMb7a
Stochastic positional embeddings improve masked image modeling
https://proceedings.mlr.press/v235/bar24a.html
Amir Bar, Florian Bordes, Assaf Shocher, Mido Assran, Pascal Vincent, Nicolas Ballas, Trevor Darrell, Amir Globerson, Yann Lecun
https://proceedings.mlr.press/v235/bar24a.html
ICML 2024
Masked Image Modeling (MIM) is a promising self-supervised learning approach that enables learning from unlabeled images. Despite its recent success, learning good representations through MIM remains challenging because it requires predicting the right semantic content in accurate locations. For example, given an incom...
https://proceedings.mlr.press/v235/bar-shalom24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bar-shalom24a/bar-shalom24a.pdf
https://openreview.net/forum?id=6djDWVTUEq
Subgraphormer: Unifying Subgraph GNNs and Graph Transformers via Graph Products
https://proceedings.mlr.press/v235/bar-shalom24a.html
Guy Bar-Shalom, Beatrice Bevilacqua, Haggai Maron
https://proceedings.mlr.press/v235/bar-shalom24a.html
ICML 2024
In the realm of Graph Neural Networks (GNNs), two exciting research directions have recently emerged: Subgraph GNNs and Graph Transformers. In this paper, we propose an architecture that integrates both approaches, dubbed Subgraphormer, which combines the enhanced expressive power, message-passing mechanisms, and aggre...
https://proceedings.mlr.press/v235/barbarani24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/barbarani24a/barbarani24a.pdf
https://openreview.net/forum?id=fNJbcxhxRj
Scale-Free Image Keypoints Using Differentiable Persistent Homology
https://proceedings.mlr.press/v235/barbarani24a.html
Giovanni Barbarani, Francesco Vaccarino, Gabriele Trivigno, Marco Guerra, Gabriele Berton, Carlo Masone
https://proceedings.mlr.press/v235/barbarani24a.html
ICML 2024
In computer vision, keypoint detection is a fundamental task, with applications spanning from robotics to image retrieval; however, existing learning-based methods suffer from scale dependency, and lack flexibility. This paper introduces a novel approach that leverages Morse theory and persistent homology, powerful too...
https://proceedings.mlr.press/v235/barbulescu24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/barbulescu24a/barbulescu24a.pdf
https://openreview.net/forum?id=FWlNA3et6X
To Each (Textual Sequence) Its Own: Improving Memorized-Data Unlearning in Large Language Models
https://proceedings.mlr.press/v235/barbulescu24a.html
George-Octavian Bărbulescu, Peter Triantafillou
https://proceedings.mlr.press/v235/barbulescu24a.html
ICML 2024
LLMs have been found to memorize training textual sequences and to regurgitate them verbatim during text generation. This is known to cause privacy and related (e.g., copyright) problems. Unlearning in LLMs then takes the form of devising new algorithms that will properly deal with these side...
https://proceedings.mlr.press/v235/bardone24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bardone24a/bardone24a.pdf
https://openreview.net/forum?id=9iGdh0wAgB
Sliding Down the Stairs: How Correlated Latent Variables Accelerate Learning with Neural Networks
https://proceedings.mlr.press/v235/bardone24a.html
Lorenzo Bardone, Sebastian Goldt
https://proceedings.mlr.press/v235/bardone24a.html
ICML 2024
Neural networks extract features from data using stochastic gradient descent (SGD). In particular, higher-order input cumulants (HOCs) are crucial for their performance. However, extracting information from the $p$th cumulant of $d$-dimensional inputs is computationally hard: the number of samples required to recover a...
https://proceedings.mlr.press/v235/bartoldson24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bartoldson24a/bartoldson24a.pdf
https://openreview.net/forum?id=HQtTg1try7
Adversarial Robustness Limits via Scaling-Law and Human-Alignment Studies
https://proceedings.mlr.press/v235/bartoldson24a.html
Brian R. Bartoldson, James Diffenderfer, Konstantinos Parasyris, Bhavya Kailkhura
https://proceedings.mlr.press/v235/bartoldson24a.html
ICML 2024
This paper revisits the simple, long-studied, yet still unsolved problem of making image classifiers robust to imperceptible perturbations. Taking CIFAR10 as an example, SOTA clean accuracy is about $100$%, but SOTA robustness to $\ell_{\infty}$-norm bounded perturbations barely exceeds $70$%. To understand this gap, w...
https://proceedings.mlr.press/v235/bartosh24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bartosh24a/bartosh24a.pdf
https://openreview.net/forum?id=xzX7kf486K
Neural Diffusion Models
https://proceedings.mlr.press/v235/bartosh24a.html
Grigory Bartosh, Dmitry Vetrov, Christian A. Naesseth
https://proceedings.mlr.press/v235/bartosh24a.html
ICML 2024
Diffusion models have shown remarkable performance on many generative tasks. Despite recent success, most diffusion models are restricted in that they only allow linear transformation of the data distribution. In contrast, a broader family of transformations can help train generative distributions more efficiently, simpl...
https://proceedings.mlr.press/v235/barzilai24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/barzilai24a/barzilai24a.pdf
https://openreview.net/forum?id=PY3bKuorBI
Generalization in Kernel Regression Under Realistic Assumptions
https://proceedings.mlr.press/v235/barzilai24a.html
Daniel Barzilai, Ohad Shamir
https://proceedings.mlr.press/v235/barzilai24a.html
ICML 2024
It is by now well-established that modern over-parameterized models seem to elude the bias-variance tradeoff and generalize well despite overfitting noise. Many recent works attempt to analyze this phenomenon in the relatively tractable setting of kernel regression. However, as we argue in detail, most past works on th...
https://proceedings.mlr.press/v235/bassan24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bassan24a/bassan24a.pdf
https://openreview.net/forum?id=veEjiN2w9F
Local vs. Global Interpretability: A Computational Complexity Perspective
https://proceedings.mlr.press/v235/bassan24a.html
Shahaf Bassan, Guy Amir, Guy Katz
https://proceedings.mlr.press/v235/bassan24a.html
ICML 2024
The local and global interpretability of various ML models has been studied extensively in recent years. However, despite significant progress in the field, many known results remain informal or lack sufficient mathematical rigor. We propose a framework for bridging this gap, by using computational complexity theory to...
https://proceedings.mlr.press/v235/bassily24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bassily24a/bassily24a.pdf
https://openreview.net/forum?id=kkqIEp2bRa
Differentially Private Domain Adaptation with Theoretical Guarantees
https://proceedings.mlr.press/v235/bassily24a.html
Raef Bassily, Corinna Cortes, Anqi Mao, Mehryar Mohri
https://proceedings.mlr.press/v235/bassily24a.html
ICML 2024
In many applications, the labeled data at the learner’s disposal is subject to privacy constraints and is relatively limited. To derive a more accurate predictor for the target domain, it is often beneficial to leverage publicly available labeled data from an alternative domain, somewhat close to the target domain. Thi...
https://proceedings.mlr.press/v235/basu24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/basu24a/basu24a.pdf
https://openreview.net/forum?id=A9MiJdetnZ
A Statistical Framework for Data-dependent Retrieval-Augmented Models
https://proceedings.mlr.press/v235/basu24a.html
Soumya Basu, Ankit Singh Rawat, Manzil Zaheer
https://proceedings.mlr.press/v235/basu24a.html
ICML 2024
Modern ML systems increasingly augment input instances with additional relevant information to enhance final prediction. Despite growing interest in such retrieval-augmented models, their fundamental properties and training are not well understood. We propose a statistical framework to study such models with two compon...
https://proceedings.mlr.press/v235/basu24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/basu24b/basu24b.pdf
https://openreview.net/forum?id=fsVBsxjRER
On Mechanistic Knowledge Localization in Text-to-Image Generative Models
https://proceedings.mlr.press/v235/basu24b.html
Samyadeep Basu, Keivan Rezaei, Priyatham Kattakinda, Vlad I Morariu, Nanxuan Zhao, Ryan A. Rossi, Varun Manjunatha, Soheil Feizi
https://proceedings.mlr.press/v235/basu24b.html
ICML 2024
Identifying layers within text-to-image models which control visual attributes can facilitate efficient model editing through closed-form updates. Recent work leveraging causal tracing shows that early Stable-Diffusion variants confine knowledge primarily to the first layer of the CLIP text-encoder, while it diffuses t...
https://proceedings.mlr.press/v235/bechavod24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bechavod24a/bechavod24a.pdf
https://openreview.net/forum?id=6EF0bxcZvT
Monotone Individual Fairness
https://proceedings.mlr.press/v235/bechavod24a.html
Yahav Bechavod
https://proceedings.mlr.press/v235/bechavod24a.html
ICML 2024
We revisit the problem of online learning with individual fairness, where an online learner strives to maximize predictive accuracy while ensuring that similar individuals are treated similarly. We first extend the frameworks of Gillen et al. (2018); Bechavod et al. (2020), which rely on feedback from human auditors re...
https://proceedings.mlr.press/v235/bechler-speicher24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bechler-speicher24a/bechler-speicher24a.pdf
https://openreview.net/forum?id=fSNHK7mu3j
Graph Neural Networks Use Graphs When They Shouldn’t
https://proceedings.mlr.press/v235/bechler-speicher24a.html
Maya Bechler-Speicher, Ido Amos, Ran Gilad-Bachrach, Amir Globerson
https://proceedings.mlr.press/v235/bechler-speicher24a.html
ICML 2024
Predictions over graphs play a crucial role in various domains, including social networks and medicine. Graph Neural Networks (GNNs) have emerged as the dominant approach for learning on graph data. Although a graph-structure is provided as input to the GNN, in some cases the best solution can be obtained by ignoring i...
https://proceedings.mlr.press/v235/beck24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/beck24a/beck24a.pdf
https://openreview.net/forum?id=43HZG9zwaj
Diffusion Tempering Improves Parameter Estimation with Probabilistic Integrators for Ordinary Differential Equations
https://proceedings.mlr.press/v235/beck24a.html
Jonas Beck, Nathanael Bosch, Michael Deistler, Kyra L. Kadhim, Jakob H. Macke, Philipp Hennig, Philipp Berens
https://proceedings.mlr.press/v235/beck24a.html
ICML 2024
Ordinary differential equations (ODEs) are widely used to describe dynamical systems in science, but identifying parameters that explain experimental measurements is challenging. In particular, although ODEs are differentiable and would allow for gradient-based parameter optimization, the nonlinear dynamics of ODEs oft...
https://proceedings.mlr.press/v235/becker24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/becker24a/becker24a.pdf
https://openreview.net/forum?id=CvRu2inbGV
Standardized Interpretable Fairness Measures for Continuous Risk Scores
https://proceedings.mlr.press/v235/becker24a.html
Ann-Kristin Becker, Oana Dumitrasc, Klaus Broelemann
https://proceedings.mlr.press/v235/becker24a.html
ICML 2024
We propose a standardized version of fairness measures for continuous scores with a reasonable interpretation based on the Wasserstein distance. Our measures are easily computable and well suited for quantifying and interpreting the strength of group disparities as well as for comparing biases across different models, ...
https://proceedings.mlr.press/v235/behrouz24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/behrouz24a/behrouz24a.pdf
https://openreview.net/forum?id=nOjZfpLyh1
Unsupervised Representation Learning of Brain Activity via Bridging Voxel Activity and Functional Connectivity
https://proceedings.mlr.press/v235/behrouz24a.html
Ali Behrouz, Parsa Delavari, Farnoosh Hashemi
https://proceedings.mlr.press/v235/behrouz24a.html
ICML 2024
Effective brain representation learning is a key step toward the understanding of cognitive processes and diagnosis of neurological diseases/disorders. Existing studies have focused on either (1) voxel-level activity, where only a single weight relating the voxel activity to the task (i.e., aggregation of voxel activit...
https://proceedings.mlr.press/v235/belrose24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/belrose24a/belrose24a.pdf
https://openreview.net/forum?id=IGdpKP0N6w
Neural Networks Learn Statistics of Increasing Complexity
https://proceedings.mlr.press/v235/belrose24a.html
Nora Belrose, Quintin Pope, Lucia Quirke, Alex Troy Mallen, Xiaoli Fern
https://proceedings.mlr.press/v235/belrose24a.html
ICML 2024
The distributional simplicity bias (DSB) posits that neural networks learn low-order moments of the data distribution first, before moving on to higher-order correlations. In this work, we present compelling new evidence for the DSB by showing that networks automatically learn to perform well on maximum-entropy distrib...
https://proceedings.mlr.press/v235/ben-basat24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ben-basat24a/ben-basat24a.pdf
https://openreview.net/forum?id=gWEwIlZrbQ
Accelerating Federated Learning with Quick Distributed Mean Estimation
https://proceedings.mlr.press/v235/ben-basat24a.html
Ran Ben-Basat, Shay Vargaftik, Amit Portnoy, Gil Einziger, Yaniv Ben-Itzhak, Michael Mitzenmacher
https://proceedings.mlr.press/v235/ben-basat24a.html
ICML 2024
Distributed Mean Estimation (DME), in which $n$ clients communicate vectors to a parameter server that estimates their average, is a fundamental building block in communication-efficient federated learning. In this paper, we improve on previous DME techniques that achieve the optimal $O(1/n)$ Normalized Mean Squared Er...
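A toy DME round trip makes the setting concrete. The sketch below uses simple unbiased 1-bit stochastic rounding per coordinate — an illustration of the problem only, not the paper's technique, and all names are hypothetical.

```python
import random

def stochastic_round(x, lo, hi, rng):
    """Unbiased 1-bit quantization of x in [lo, hi]: return hi with
    probability (x - lo) / (hi - lo), else lo, so E[output] = x."""
    p = (x - lo) / (hi - lo) if hi > lo else 0.0
    return hi if rng.random() < p else lo

def dme_round_trip(client_vectors, rng=None):
    """Each client sends one bit per coordinate plus its (lo, hi)
    range; the server decodes and averages the vectors."""
    rng = rng or random.Random(0)
    n = len(client_vectors)
    dim = len(client_vectors[0])
    decoded = []
    for v in client_vectors:
        lo, hi = min(v), max(v)
        decoded.append([stochastic_round(x, lo, hi, rng) for x in v])
    return [sum(d[i] for d in decoded) / n for i in range(dim)]
```

Because each coordinate is rounded unbiasedly, the server's estimate equals the true mean in expectation; the NMSE of schemes like this is what the paper improves on.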
https://proceedings.mlr.press/v235/ben-dov24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ben-dov24a/ben-dov24a.pdf
https://openreview.net/forum?id=Ez3Lckpe4l
The Role of Learning Algorithms in Collective Action
https://proceedings.mlr.press/v235/ben-dov24a.html
Omri Ben-Dov, Jake Fawkes, Samira Samadi, Amartya Sanyal
https://proceedings.mlr.press/v235/ben-dov24a.html
ICML 2024
Collective action in machine learning is the study of the control that a coordinated group can have over machine learning algorithms. While previous research has concentrated on assessing the impact of collectives against Bayes (sub-)optimal classifiers, this perspective is limited in that it does not account for the c...
https://proceedings.mlr.press/v235/ben-hamu24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ben-hamu24a/ben-hamu24a.pdf
https://openreview.net/forum?id=SE20BFqj6J
D-Flow: Differentiating through Flows for Controlled Generation
https://proceedings.mlr.press/v235/ben-hamu24a.html
Heli Ben-Hamu, Omri Puny, Itai Gat, Brian Karrer, Uriel Singer, Yaron Lipman
https://proceedings.mlr.press/v235/ben-hamu24a.html
ICML 2024
Taming the generation outcome of state of the art Diffusion and Flow-Matching (FM) models without having to re-train a task-specific model unlocks a powerful tool for solving inverse problems, conditional generation, and controlled generation in general. In this work we introduce D-Flow, a simple framework for controll...
https://proceedings.mlr.press/v235/benkert24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/benkert24a/benkert24a.pdf
https://openreview.net/forum?id=zII3Olw7cr
Transitional Uncertainty with Layered Intermediate Predictions
https://proceedings.mlr.press/v235/benkert24a.html
Ryan Benkert, Mohit Prabhushankar, Ghassan Alregib
https://proceedings.mlr.press/v235/benkert24a.html
ICML 2024
In this paper, we discuss feature engineering for single-pass uncertainty estimation. For accurate uncertainty estimates, neural networks must extract differences in the feature space that quantify uncertainty. This could be achieved by current single-pass approaches that maintain feature distances between data points ...
https://proceedings.mlr.press/v235/benomar24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/benomar24a/benomar24a.pdf
https://openreview.net/forum?id=jJLcXGB2uA
Non-clairvoyant Scheduling with Partial Predictions
https://proceedings.mlr.press/v235/benomar24a.html
Ziyad Benomar, Vianney Perchet
https://proceedings.mlr.press/v235/benomar24a.html
ICML 2024
The non-clairvoyant scheduling problem has gained new interest within learning-augmented algorithms, where the decision-maker is equipped with predictions without any quality guarantees. In practical settings, access to predictions may be reduced to specific instances, due to cost or data limitations. Our investigation...
https://proceedings.mlr.press/v235/berman24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/berman24a/berman24a.pdf
https://openreview.net/forum?id=AocOA4h3bu
Sequential Disentanglement by Extracting Static Information From A Single Sequence Element
https://proceedings.mlr.press/v235/berman24a.html
Nimrod Berman, Ilan Naiman, Idan Arbiv, Gal Fadlon, Omri Azencot
https://proceedings.mlr.press/v235/berman24a.html
ICML 2024
One of the fundamental representation learning tasks is unsupervised sequential disentanglement, where latent codes of inputs are decomposed to a single static factor and a sequence of dynamic factors. To extract this latent information, existing methods condition the static and dynamic codes on the entire input sequen...
https://proceedings.mlr.press/v235/berman24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/berman24b/berman24b.pdf
https://openreview.net/forum?id=iHSgfGob9j
CoLoRA: Continuous low-rank adaptation for reduced implicit neural modeling of parameterized partial differential equations
https://proceedings.mlr.press/v235/berman24b.html
Jules Berman, Benjamin Peherstorfer
https://proceedings.mlr.press/v235/berman24b.html
ICML 2024
This work introduces reduced models based on Continuous Low Rank Adaptation (CoLoRA) that pre-train neural networks for a given partial differential equation and then continuously adapt low-rank weights in time to rapidly predict the evolution of solution fields at new physics parameters and new initial conditions. The...
https://proceedings.mlr.press/v235/bertolotti24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bertolotti24a/bertolotti24a.pdf
https://openreview.net/forum?id=yyYMAprcAR
By Tying Embeddings You Are Assuming the Distributional Hypothesis
https://proceedings.mlr.press/v235/bertolotti24a.html
Francesco Bertolotti, Walter Cazzola
https://proceedings.mlr.press/v235/bertolotti24a.html
ICML 2024
In this work, we analyze both theoretically and empirically the effect of tied input-output embeddings—a popular technique that reduces the model size while often improving training. Interestingly, we found that this technique is connected to Harris (1954)’s distributional hypothesis—often portrayed by the famous Firth...
https://proceedings.mlr.press/v235/bettini24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bettini24a/bettini24a.pdf
https://openreview.net/forum?id=qQjUgItPq4
Controlling Behavioral Diversity in Multi-Agent Reinforcement Learning
https://proceedings.mlr.press/v235/bettini24a.html
Matteo Bettini, Ryan Kortvelesy, Amanda Prorok
https://proceedings.mlr.press/v235/bettini24a.html
ICML 2024
The study of behavioral diversity in Multi-Agent Reinforcement Learning (MARL) is a nascent yet promising field. In this context, the present work deals with the question of how to control the diversity of a multi-agent system. With no existing approaches to control diversity to a set value, current solutions focus on ...
https://proceedings.mlr.press/v235/beukman24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/beukman24a/beukman24a.pdf
https://openreview.net/forum?id=LRnXPxDksA
Refining Minimax Regret for Unsupervised Environment Design
https://proceedings.mlr.press/v235/beukman24a.html
Michael Beukman, Samuel Coward, Michael Matthews, Mattie Fellows, Minqi Jiang, Michael D Dennis, Jakob Nicolaus Foerster
https://proceedings.mlr.press/v235/beukman24a.html
ICML 2024
In unsupervised environment design, reinforcement learning agents are trained on environment configurations (levels) generated by an adversary that maximises some objective. Regret is a commonly used objective that theoretically results in a minimax regret (MMR) policy with desirable robustness guarantees; in particula...
https://proceedings.mlr.press/v235/beurer-kellner24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/beurer-kellner24a/beurer-kellner24a.pdf
https://openreview.net/forum?id=pXaEYzrFae
Guiding LLMs The Right Way: Fast, Non-Invasive Constrained Generation
https://proceedings.mlr.press/v235/beurer-kellner24a.html
Luca Beurer-Kellner, Marc Fischer, Martin Vechev
https://proceedings.mlr.press/v235/beurer-kellner24a.html
ICML 2024
To ensure that text generated by large language models (LLMs) is in an expected format, constrained decoding methods propose to enforce strict formal language constraints during generation. However, as we show in this work, not only do such methods often incur performance overhead during generation, but many of them al...
https://proceedings.mlr.press/v235/beurer-kellner24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/beurer-kellner24b/beurer-kellner24b.pdf
https://openreview.net/forum?id=2Yu5FWdzde
Prompt Sketching for Large Language Models
https://proceedings.mlr.press/v235/beurer-kellner24b.html
Luca Beurer-Kellner, Mark Niklas Mueller, Marc Fischer, Martin Vechev
https://proceedings.mlr.press/v235/beurer-kellner24b.html
ICML 2024
Many recent prompting strategies for large language models (LLMs) query the model multiple times sequentially – first to produce intermediate results and then the final answer. However, using these methods, both decoder and model are unaware of potential follow-up prompts, leading to disconnected and undesirably wordy ...
https://proceedings.mlr.press/v235/bewley24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bewley24a/bewley24a.pdf
https://openreview.net/forum?id=Ad9msn1SKC
Counterfactual Metarules for Local and Global Recourse
https://proceedings.mlr.press/v235/bewley24a.html
Tom Bewley, Salim I. Amoukou, Saumitra Mishra, Daniele Magazzeni, Manuela Veloso
https://proceedings.mlr.press/v235/bewley24a.html
ICML 2024
We introduce T-CREx, a novel model-agnostic method for local and global counterfactual explanation (CE), which summarises recourse options for both individuals and groups in the form of generalised rules. It leverages tree-based surrogate models to learn the counterfactual rules, alongside metarules denoting their regi...
https://proceedings.mlr.press/v235/beznosikov24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/beznosikov24a/beznosikov24a.pdf
https://openreview.net/forum?id=Zw52bJCZXc
Sarah Frank-Wolfe: Methods for Constrained Optimization with Best Rates and Practical Features
https://proceedings.mlr.press/v235/beznosikov24a.html
Aleksandr Beznosikov, David Dobre, Gauthier Gidel
https://proceedings.mlr.press/v235/beznosikov24a.html
ICML 2024
The Frank-Wolfe (FW) method is a popular approach for solving optimization problems with structured constraints that arise in machine learning applications. In recent years, stochastic versions of FW have gained popularity, motivated by large datasets for which the computation of the full gradient is prohibitively expe...
https://proceedings.mlr.press/v235/bharadhwaj24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bharadhwaj24a/bharadhwaj24a.pdf
https://openreview.net/forum?id=Jtjurj7oIJ
Position: Scaling Simulation is Neither Necessary Nor Sufficient for In-the-Wild Robot Manipulation
https://proceedings.mlr.press/v235/bharadhwaj24a.html
Homanga Bharadhwaj
https://proceedings.mlr.press/v235/bharadhwaj24a.html
ICML 2024
In this paper, we develop a structured critique of robotic simulations for real-world manipulation, by arguing that scaling simulators is neither necessary nor sufficient for making progress in general-purpose real-world robotic manipulation agents that are compliant with human preferences. With the ubiquity of robotic...
https://proceedings.mlr.press/v235/bhattacharya24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bhattacharya24a/bhattacharya24a.pdf
https://openreview.net/forum?id=rucbIsWoEV
Dynamic Facility Location in High Dimensional Euclidean Spaces
https://proceedings.mlr.press/v235/bhattacharya24a.html
Sayan Bhattacharya, Gramoz Goranci, Shaofeng H.-C. Jiang, Yi Qian, Yubo Zhang
https://proceedings.mlr.press/v235/bhattacharya24a.html
ICML 2024
We study the facility location problem in the dynamic setting, where the goal is to efficiently process an intermixed sequence of point insertions and deletions while maintaining a high quality and stable solution. Although the problem has been studied in the context of general metrics and low-dimensional spaces, much ...
https://proceedings.mlr.press/v235/bhattacharyya24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bhattacharyya24a/bhattacharyya24a.pdf
https://openreview.net/forum?id=6OSLjErBhh
Total Variation Distance Meets Probabilistic Inference
https://proceedings.mlr.press/v235/bhattacharyya24a.html
Arnab Bhattacharyya, Sutanu Gayen, Kuldeep S. Meel, Dimitrios Myrisiotis, A. Pavan, N. V. Vinodchandran
https://proceedings.mlr.press/v235/bhattacharyya24a.html
ICML 2024
In this paper, we establish a novel connection between total variation (TV) distance estimation and probabilistic inference. In particular, we present an efficient, structure-preserving reduction from relative approximation of TV distance to probabilistic inference over directed graphical models. This reduction leads t...
https://proceedings.mlr.press/v235/bhirangi24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bhirangi24a/bhirangi24a.pdf
https://openreview.net/forum?id=TK7xkOsXDu
Hierarchical State Space Models for Continuous Sequence-to-Sequence Modeling
https://proceedings.mlr.press/v235/bhirangi24a.html
Raunaq Bhirangi, Chenyu Wang, Venkatesh Pattabiraman, Carmel Majidi, Abhinav Gupta, Tess Hellebrekers, Lerrel Pinto
https://proceedings.mlr.press/v235/bhirangi24a.html
ICML 2024
Reasoning from sequences of raw sensory data is a ubiquitous problem across fields ranging from medical devices to robotics. These problems often involve using long sequences of raw sensor data (e.g. magnetometers, piezoresistors) to predict sequences of desirable physical quantities (e.g. force, inertial measurements)...
https://proceedings.mlr.press/v235/bhowal24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bhowal24a/bhowal24a.pdf
https://openreview.net/forum?id=Ao9UUaScAU
Why do Variational Autoencoders Really Promote Disentanglement?
https://proceedings.mlr.press/v235/bhowal24a.html
Pratik Bhowal, Achint Soni, Sirisha Rambhatla
https://proceedings.mlr.press/v235/bhowal24a.html
ICML 2024
Despite not being designed for this purpose, the use of variational autoencoders (VAEs) has proven remarkably effective for disentangled representation learning (DRL). Recent research attributes this success to certain characteristics of the loss function that prevent latent space rotation, or hypothesize about the ort...
https://proceedings.mlr.press/v235/bhuyan24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bhuyan24a/bhuyan24a.pdf
https://openreview.net/forum?id=icijMMWwdG
Best of Both Worlds Guarantees for Smoothed Online Quadratic Optimization
https://proceedings.mlr.press/v235/bhuyan24a.html
Neelkamal Bhuyan, Debankur Mukherjee, Adam Wierman
https://proceedings.mlr.press/v235/bhuyan24a.html
ICML 2024
We study the smoothed online quadratic optimization (SOQO) problem where, at each round $t$, a player plays an action $x_t$ in response to a quadratic hitting cost and an additional squared $\ell_2$-norm cost for switching actions. This problem class has strong connections to a wide range of application domains includi...
https://proceedings.mlr.press/v235/bian24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bian24a/bian24a.pdf
https://openreview.net/forum?id=Rx9GMufByc
Multi-Patch Prediction: Adapting Language Models for Time Series Representation Learning
https://proceedings.mlr.press/v235/bian24a.html
Yuxuan Bian, Xuan Ju, Jiangtong Li, Zhijian Xu, Dawei Cheng, Qiang Xu
https://proceedings.mlr.press/v235/bian24a.html
ICML 2024
In this study, we present $\text{aL\small{LM}4T\small{S}}$, an innovative framework that adapts Large Language Models (LLMs) for time-series representation learning. Central to our approach is that we reconceive time-series forecasting as a self-supervised, multi-patch prediction task, which, compared to traditional ma...
https://proceedings.mlr.press/v235/bian24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bian24b/bian24b.pdf
https://openreview.net/forum?id=QhKsE7YAJk
Naive Bayes Classifiers over Missing Data: Decision and Poisoning
https://proceedings.mlr.press/v235/bian24b.html
Song Bian, Xiating Ouyang, Zhiwei Fan, Paraschos Koutris
https://proceedings.mlr.press/v235/bian24b.html
ICML 2024
We study the certifiable robustness of ML classifiers on dirty datasets that could contain missing values. A test point is certifiably robust for an ML classifier if the classifier returns the same prediction for that test point, regardless of which cleaned version (among exponentially many) of the dirty dataset the cl...
https://proceedings.mlr.press/v235/bianchi24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bianchi24a/bianchi24a.pdf
https://openreview.net/forum?id=CmOmaxkt8p
How Well Can LLMs Negotiate? NegotiationArena Platform and Analysis
https://proceedings.mlr.press/v235/bianchi24a.html
Federico Bianchi, Patrick John Chia, Mert Yuksekgonul, Jacopo Tagliabue, Dan Jurafsky, James Zou
https://proceedings.mlr.press/v235/bianchi24a.html
ICML 2024
Negotiation is the basis of social interactions; humans negotiate everything from the price of cars to how to share common resources. With rapidly growing interest in using large language models (LLMs) to act as agents on behalf of human users, such LLM agents would also need to be able to negotiate. In this paper, we ...
https://proceedings.mlr.press/v235/bianchi24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bianchi24b/bianchi24b.pdf
https://openreview.net/forum?id=Qc5umSsUi8
Scalable Safe Policy Improvement for Factored Multi-Agent MDPs
https://proceedings.mlr.press/v235/bianchi24b.html
Federico Bianchi, Edoardo Zorzi, Alberto Castellini, Thiago D. Simão, Matthijs T. J. Spaan, Alessandro Farinelli
https://proceedings.mlr.press/v235/bianchi24b.html
ICML 2024
In this work, we focus on safe policy improvement in multi-agent domains where current state-of-the-art methods cannot be effectively applied because of large state and action spaces. We consider recent results using Monte Carlo Tree Search for Safe Policy Improvement with Baseline Bootstrapping and propose a novel alg...
https://proceedings.mlr.press/v235/bica24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bica24a/bica24a.pdf
https://openreview.net/forum?id=5nxIRQ8GNa
Improving fine-grained understanding in image-text pre-training
https://proceedings.mlr.press/v235/bica24a.html
Ioana Bica, Anastasija Ilic, Matthias Bauer, Goker Erdogan, Matko Bošnjak, Christos Kaplanis, Alexey A. Gritsenko, Matthias Minderer, Charles Blundell, Razvan Pascanu, Jovana Mitrovic
https://proceedings.mlr.press/v235/bica24a.html
ICML 2024
We introduce SPARse fine-grained Contrastive alignment (SPARC), a simple method for pretraining more fine-grained multimodal representations from image-text pairs. Given that multiple image patches often correspond to single words, we propose to learn a grouping of image patches for every token in the caption. To achie...
https://proceedings.mlr.press/v235/biecek24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/biecek24a/biecek24a.pdf
https://openreview.net/forum?id=ooikIHLHCs
Position: Explain to Question not to Justify
https://proceedings.mlr.press/v235/biecek24a.html
Przemyslaw Biecek, Wojciech Samek
https://proceedings.mlr.press/v235/biecek24a.html
ICML 2024
Explainable Artificial Intelligence (XAI) is a young but very promising field of research. Unfortunately, the progress in this field is currently slowed down by divergent and incompatible goals. We separate various threads tangled within the area of XAI into two complementary cultures of human/value-oriented explanatio...
https://proceedings.mlr.press/v235/bini24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bini24a/bini24a.pdf
https://openreview.net/forum?id=yPDTXQwUPy
ETHER: Efficient Finetuning of Large-Scale Models with Hyperplane Reflections
https://proceedings.mlr.press/v235/bini24a.html
Massimo Bini, Karsten Roth, Zeynep Akata, Anna Khoreva
https://proceedings.mlr.press/v235/bini24a.html
ICML 2024
Parameter-efficient finetuning (PEFT) has become ubiquitous to adapt foundation models to downstream task requirements while retaining their generalization ability. However, the amount of additionally introduced parameters and compute for successful adaptation and hyperparameter searches can explode quickly, especially...
https://proceedings.mlr.press/v235/biparva24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/biparva24a/biparva24a.pdf
https://openreview.net/forum?id=DwniHlwcOB
Incorporating Information into Shapley Values: Reweighting via a Maximum Entropy Approach
https://proceedings.mlr.press/v235/biparva24a.html
Darya Biparva, Donatello Materassi
https://proceedings.mlr.press/v235/biparva24a.html
ICML 2024
Both the marginal contributions needed for the computation of Shapley values and the graph produced by Pearl-Verma theorem rely on the choice of an ordering of the variables. For Shapley values, the marginal contributions are averaged over all orderings, while in causal inference methods, the typical approach is to sel...
https://proceedings.mlr.press/v235/blaauwbroek24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/blaauwbroek24a/blaauwbroek24a.pdf
https://openreview.net/forum?id=A7CtiozznN
Graph2Tac: Online Representation Learning of Formal Math Concepts
https://proceedings.mlr.press/v235/blaauwbroek24a.html
Lasse Blaauwbroek, Mirek Olšák, Jason Rute, Fidel Ivan Schaposnik Massolo, Jelle Piepenbrock, Vasily Pestun
https://proceedings.mlr.press/v235/blaauwbroek24a.html
ICML 2024
In proof assistants, the physical proximity between two formal mathematical concepts is a strong predictor of their mutual relevance. Furthermore, lemmas with close proximity regularly exhibit similar proof structures. We show that this locality property can be exploited through online learning techniques to obtain sol...
https://proceedings.mlr.press/v235/black24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/black24a/black24a.pdf
https://openreview.net/forum?id=3pxMIjB9QK
Biharmonic Distance of Graphs and its Higher-Order Variants: Theoretical Properties with Applications to Centrality and Clustering
https://proceedings.mlr.press/v235/black24a.html
Mitchell Black, Lucy Lin, Weng-Keen Wong, Amir Nayyeri
https://proceedings.mlr.press/v235/black24a.html
ICML 2024
Effective resistance is a distance between vertices of a graph that is both theoretically interesting and useful in applications. We study a variant of effective resistance called the biharmonic distance. While the effective resistance measures how well-connected two vertices are, we prove several theoretical results s...
https://proceedings.mlr.press/v235/black24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/black24b/black24b.pdf
https://openreview.net/forum?id=va3r3hSA6n
Comparing Graph Transformers via Positional Encodings
https://proceedings.mlr.press/v235/black24b.html
Mitchell Black, Zhengchao Wan, Gal Mishne, Amir Nayyeri, Yusu Wang
https://proceedings.mlr.press/v235/black24b.html
ICML 2024
The distinguishing power of graph transformers is tied to the choice of positional encoding: features used to augment the base transformer with information about the graph. There are two primary types of positional encoding: absolute positional encodings (APEs) and relative positional encodings (RPEs). APEs assign feat...
https://proceedings.mlr.press/v235/blanchet24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/blanchet24a/blanchet24a.pdf
https://openreview.net/forum?id=XPP6K57bop
Stability Evaluation through Distributional Perturbation Analysis
https://proceedings.mlr.press/v235/blanchet24a.html
Jose Blanchet, Peng Cui, Jiajin Li, Jiashuo Liu
https://proceedings.mlr.press/v235/blanchet24a.html
ICML 2024
The performance of learning models often deteriorates when deployed in out-of-sample environments. To ensure reliable deployment, we propose a stability evaluation criterion based on distributional perturbations. Conceptually, our stability evaluation criterion is defined as the minimal perturbation required on our obs...
https://proceedings.mlr.press/v235/bleistein24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bleistein24a/bleistein24a.pdf
https://openreview.net/forum?id=xGlVkBSDdt
Dynamic Survival Analysis with Controlled Latent States
https://proceedings.mlr.press/v235/bleistein24a.html
Linus Bleistein, Van Tuan Nguyen, Adeline Fermanian, Agathe Guilloux
https://proceedings.mlr.press/v235/bleistein24a.html
ICML 2024
We consider the task of learning individual-specific intensities of counting processes from a set of static variables and irregularly sampled time series. We introduce a novel modelization approach in which the intensity is the solution to a controlled differential equation. We first design a neural estimator by buildi...
https://proceedings.mlr.press/v235/blessing24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/blessing24a/blessing24a.pdf
https://openreview.net/forum?id=fVg9YrSllr
Beyond ELBOs: A Large-Scale Evaluation of Variational Methods for Sampling
https://proceedings.mlr.press/v235/blessing24a.html
Denis Blessing, Xiaogang Jia, Johannes Esslinger, Francisco Vargas, Gerhard Neumann
https://proceedings.mlr.press/v235/blessing24a.html
ICML 2024
Monte Carlo methods, Variational Inference, and their combinations play a pivotal role in sampling from intractable probability distributions. However, current studies lack a unified evaluation framework, relying on disparate performance measures and limited method comparisons across diverse tasks, complicating the ass...
https://proceedings.mlr.press/v235/bok24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bok24a/bok24a.pdf
https://openreview.net/forum?id=KCVCFsPkrm
Shifted Interpolation for Differential Privacy
https://proceedings.mlr.press/v235/bok24a.html
Jinho Bok, Weijie J Su, Jason Altschuler
https://proceedings.mlr.press/v235/bok24a.html
ICML 2024
Noisy gradient descent and its variants are the predominant algorithms for differentially private machine learning. It is a fundamental question to quantify their privacy leakage, yet tight characterizations remain open even in the foundational setting of convex losses. This paper improves over previous analyses by est...
https://proceedings.mlr.press/v235/bombari24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bombari24a/bombari24a.pdf
https://openreview.net/forum?id=o6N1Bqay0k
How Spurious Features are Memorized: Precise Analysis for Random and NTK Features
https://proceedings.mlr.press/v235/bombari24a.html
Simone Bombari, Marco Mondelli
https://proceedings.mlr.press/v235/bombari24a.html
ICML 2024
Deep learning models are known to overfit and memorize spurious features in the training dataset. While numerous empirical studies have aimed at understanding this phenomenon, a rigorous theoretical framework to quantify it is still missing. In this paper, we consider spurious features that are uncorrelated with the le...
https://proceedings.mlr.press/v235/bombari24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bombari24b/bombari24b.pdf
https://openreview.net/forum?id=JBaPBPrn93
Towards Understanding the Word Sensitivity of Attention Layers: A Study via Random Features
https://proceedings.mlr.press/v235/bombari24b.html
Simone Bombari, Marco Mondelli
https://proceedings.mlr.press/v235/bombari24b.html
ICML 2024
Understanding the reasons behind the exceptional success of transformers requires a better analysis of why attention layers are suitable for NLP tasks. In particular, such tasks require predictive models to capture contextual meaning which often depends on one or few words, even if the sentence is long. Our work studie...
https://proceedings.mlr.press/v235/bonel24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bonel24a/bonel24a.pdf
https://openreview.net/forum?id=hdpv6mall8
Position: Machine Learning-powered Assessments of the EU Digital Services Act Aid Quantify Policy Impacts on Online Harms
https://proceedings.mlr.press/v235/bonel24a.html
Eleonora Bonel, Luca Nannini, Davide Bassi, Michele Joshua Maggini
https://proceedings.mlr.press/v235/bonel24a.html
ICML 2024
While machine learning shows promise in automated knowledge generation, current techniques such as large language models and micro-targeted influence operations can be exploited for harmful purposes like the proliferation of disinformation. The European Union’s Digital Services Act (DSA) is an exemplary policy response...
https://proceedings.mlr.press/v235/bordelon24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bordelon24a/bordelon24a.pdf
https://openreview.net/forum?id=nbOY1OmtRc
A Dynamical Model of Neural Scaling Laws
https://proceedings.mlr.press/v235/bordelon24a.html
Blake Bordelon, Alexander Atanasov, Cengiz Pehlevan
https://proceedings.mlr.press/v235/bordelon24a.html
ICML 2024
On a variety of tasks, the performance of neural networks predictably improves with training time, dataset size and model size across many orders of magnitude. This phenomenon is known as a neural scaling law. Of fundamental importance is the compute-optimal scaling law, which reports the performance as a function of u...
https://proceedings.mlr.press/v235/boschi24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/boschi24a/boschi24a.pdf
https://openreview.net/forum?id=a7MW5kFFOf
A New Computationally Efficient Algorithm to solve Feature Selection for Functional Data Classification in High-dimensional Spaces
https://proceedings.mlr.press/v235/boschi24a.html
Tobia Boschi, Francesca Bonin, Rodrigo Ordonez-Hurtado, Alessandra Pascale, Jonathan P Epperlein
https://proceedings.mlr.press/v235/boschi24a.html
ICML 2024
This paper introduces a novel methodology for Feature Selection for Functional Classification, FSFC, that addresses the challenge of jointly performing feature selection and classification of functional data in scenarios with categorical responses and multivariate longitudinal features. FSFC tackles a newly defined opt...
https://proceedings.mlr.press/v235/bouchard24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bouchard24a/bouchard24a.pdf
https://openreview.net/forum?id=uQiFsBil3p
Random matrix theory improved Fréchet mean of symmetric positive definite matrices
https://proceedings.mlr.press/v235/bouchard24a.html
Florent Bouchard, Ammar Mian, Malik Tiomoko, Guillaume Ginolhac, Frederic Pascal
https://proceedings.mlr.press/v235/bouchard24a.html
ICML 2024
In this study, we consider the realm of covariance matrices in machine learning, particularly focusing on computing Fréchet means on the manifold of symmetric positive definite matrices, commonly referred to as Karcher or geometric means. Such means are leveraged in numerous machine learning tasks. Relying on advanced ...
https://proceedings.mlr.press/v235/bouchiat24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bouchiat24a/bouchiat24a.pdf
https://openreview.net/forum?id=0pSTzCnEmi
Improving Neural Additive Models with Bayesian Principles
https://proceedings.mlr.press/v235/bouchiat24a.html
Kouroche Bouchiat, Alexander Immer, Hugo Yèche, Gunnar Ratsch, Vincent Fortuin
https://proceedings.mlr.press/v235/bouchiat24a.html
ICML 2024
Neural additive models (NAMs) enhance the transparency of deep neural networks by handling input features in separate additive sub-networks. However, they lack inherent mechanisms that provide calibrated uncertainties and enable selection of relevant features and interactions. Approaching NAMs from a Bayesian perspecti...
https://proceedings.mlr.press/v235/bounoua24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bounoua24a/bounoua24a.pdf
https://openreview.net/forum?id=LuhWZ2oJ5L
S$Ω$I: Score-based O-INFORMATION Estimation
https://proceedings.mlr.press/v235/bounoua24a.html
Mustapha Bounoua, Giulio Franzese, Pietro Michiardi
https://proceedings.mlr.press/v235/bounoua24a.html
ICML 2024
The analysis of scientific data and complex multivariate systems requires information quantities that capture relationships among multiple random variables. Recently, new information-theoretic measures have been developed to overcome the shortcomings of classical ones, such as mutual information, that are restricted to...
https://proceedings.mlr.press/v235/bravo24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bravo24a/bravo24a.pdf
https://openreview.net/forum?id=UjDp4Wkq2V
On dimensionality of feature vectors in MPNNs
https://proceedings.mlr.press/v235/bravo24a.html
César Bravo, Alexander Kozachinskiy, Cristobal Rojas
https://proceedings.mlr.press/v235/bravo24a.html
ICML 2024
We revisit the result of Morris et al. (AAAI’19) that message-passing graph neural networks (MPNNs) are equal in their distinguishing power to the Weisfeiler–Leman (WL) isomorphism test. Morris et al. show their result with ReLU activation function and $O(n)$-dimensional feature vectors, where $n$ is the size of the g...
https://proceedings.mlr.press/v235/brenner24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/brenner24a/brenner24a.pdf
https://openreview.net/forum?id=b1iurBHDck
Integrating Multimodal Data for Joint Generative Modeling of Complex Dynamics
https://proceedings.mlr.press/v235/brenner24a.html
Manuel Brenner, Florian Hess, Georgia Koppe, Daniel Durstewitz
https://proceedings.mlr.press/v235/brenner24a.html
ICML 2024
Many, if not most, systems of interest in science are naturally described as nonlinear dynamical systems. Empirically, we commonly access these systems through time series measurements. Often such time series may consist of discrete random variables rather than continuous measurements, or may be composed of measurement...
https://proceedings.mlr.press/v235/bressan24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bressan24a/bressan24a.pdf
https://openreview.net/forum?id=d5tJWH5yCi
Fully-Dynamic Approximate Decision Trees With Worst-Case Update Time Guarantees
https://proceedings.mlr.press/v235/bressan24a.html
Marco Bressan, Mauro Sozio
https://proceedings.mlr.press/v235/bressan24a.html
ICML 2024
We study the problem of maintaining a decision tree in the fully-dynamic setting, where the dataset is updated by an adversarial sequence of insertions and deletions. We present the first algorithm with strong guarantees on both the quality of the tree and the worst-case update time (the maximum time spent between two ...
https://proceedings.mlr.press/v235/brilliantov24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/brilliantov24a/brilliantov24a.pdf
https://openreview.net/forum?id=gQz30hTkRE
Applying language models to algebraic topology: generating simplicial cycles using multi-labeling in Wu’s formula
https://proceedings.mlr.press/v235/brilliantov24a.html
Kirill Brilliantov, Fedor Pavutnitskiy, Dmitry Pasechnyuk, German Magai
https://proceedings.mlr.press/v235/brilliantov24a.html
ICML 2024
Computing homotopy groups of spheres has long been a fundamental objective in algebraic topology. Various theoretical and algorithmic approaches have been developed to tackle this problem. In this paper we take a step towards the goal of comprehending the group-theoretic structure of the generators of these homotopy gr...
https://proceedings.mlr.press/v235/brown24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/brown24a/brown24a.pdf
https://openreview.net/forum?id=igRAPavrrS
Private Gradient Descent for Linear Regression: Tighter Error Bounds and Instance-Specific Uncertainty Estimation
https://proceedings.mlr.press/v235/brown24a.html
Gavin R Brown, Krishnamurthy Dj Dvijotham, Georgina Evans, Daogao Liu, Adam Smith, Abhradeep Guha Thakurta
https://proceedings.mlr.press/v235/brown24a.html
ICML 2024
We provide an improved analysis of standard differentially private gradient descent for linear regression under the squared error loss. Under modest assumptions on the input, we characterize the distribution of the iterate at each time step. Our analysis leads to new results on the algorithm’s accuracy: for a proper fi...
https://proceedings.mlr.press/v235/brown-cohen24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/brown-cohen24a/brown-cohen24a.pdf
https://openreview.net/forum?id=6jmdOTRMIO
Scalable AI Safety via Doubly-Efficient Debate
https://proceedings.mlr.press/v235/brown-cohen24a.html
Jonah Brown-Cohen, Geoffrey Irving, Georgios Piliouras
https://proceedings.mlr.press/v235/brown-cohen24a.html
ICML 2024
The emergence of pre-trained AI systems with powerful capabilities across a diverse and ever-increasing set of complex domains has raised a critical challenge for AI safety as tasks can become too complicated for humans to judge directly. Irving et al. (2018) proposed a debate method in this direction with the goal of ...
https://proceedings.mlr.press/v235/bruce24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bruce24a/bruce24a.pdf
https://openreview.net/forum?id=bJbSbJskOS
Genie: Generative Interactive Environments
https://proceedings.mlr.press/v235/bruce24a.html
Jake Bruce, Michael D Dennis, Ashley Edwards, Jack Parker-Holder, Yuge Shi, Edward Hughes, Matthew Lai, Aditi Mavalankar, Richie Steigerwald, Chris Apps, Yusuf Aytar, Sarah Maria Elisabeth Bechtle, Feryal Behbahani, Stephanie C.Y. Chan, Nicolas Heess, Lucy Gonzalez, Simon Osindero, Sherjil Ozair, Scott Reed, Jingwei Zh...
https://proceedings.mlr.press/v235/bruce24a.html
ICML 2024
We introduce Genie, the first generative interactive environment trained in an unsupervised manner from unlabelled Internet videos. The model can be prompted to generate an endless variety of action-controllable virtual worlds described through text, synthetic images, photographs, and even sketches. At 11B parameters, ...
https://proceedings.mlr.press/v235/bryutkin24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bryutkin24a/bryutkin24a.pdf
https://openreview.net/forum?id=nYX7I6PsL7
HAMLET: Graph Transformer Neural Operator for Partial Differential Equations
https://proceedings.mlr.press/v235/bryutkin24a.html
Andrey Bryutkin, Jiahao Huang, Zhongying Deng, Guang Yang, Carola-Bibiane Schönlieb, Angelica I Aviles-Rivero
https://proceedings.mlr.press/v235/bryutkin24a.html
ICML 2024
We present a novel graph transformer framework, HAMLET, designed to address the challenges in solving partial differential equations (PDEs) using neural networks. The framework uses graph transformers with modular input encoders to directly incorporate differential equation information into the solution process. This m...
https://proceedings.mlr.press/v235/bu24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bu24a/bu24a.pdf
https://openreview.net/forum?id=kzz0kn546b
Provably Neural Active Learning Succeeds via Prioritizing Perplexing Samples
https://proceedings.mlr.press/v235/bu24a.html
Dake Bu, Wei Huang, Taiji Suzuki, Ji Cheng, Qingfu Zhang, Zhiqiang Xu, Hau-San Wong
https://proceedings.mlr.press/v235/bu24a.html
ICML 2024
Neural Network-based active learning (NAL) is a cost-effective data selection technique that utilizes neural networks to select and train on a small subset of samples. While existing work successfully develops various effective or theory-justified NAL algorithms, the understanding of the two commonly used query criteri...
https://proceedings.mlr.press/v235/bu24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bu24b/bu24b.pdf
https://openreview.net/forum?id=6n99bIxb3r
Tackling Prevalent Conditions in Unsupervised Combinatorial Optimization: Cardinality, Minimum, Covering, and More
https://proceedings.mlr.press/v235/bu24b.html
Fanchen Bu, Hyeonsoo Jo, Soo Yong Lee, Sungsoo Ahn, Kijung Shin
https://proceedings.mlr.press/v235/bu24b.html
ICML 2024
Combinatorial optimization (CO) is naturally discrete, making machine-learning techniques based on differentiable optimization inapplicable. Karalias & Loukas (2020) adapted the probabilistic method by Erdős & Spencer (1974) to incorporate CO into differentiable optimization. Their work ignited the research on unsuper...
https://proceedings.mlr.press/v235/bu24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bu24c/bu24c.pdf
https://openreview.net/forum?id=fqeANcjBMT
Differentially Private Bias-Term Fine-tuning of Foundation Models
https://proceedings.mlr.press/v235/bu24c.html
Zhiqi Bu, Yu-Xiang Wang, Sheng Zha, George Karypis
https://proceedings.mlr.press/v235/bu24c.html
ICML 2024
We study the problem of differentially private (DP) fine-tuning of large pre-trained models — a recent privacy-preserving approach suitable for solving downstream tasks with sensitive data. Existing work has demonstrated that high accuracy is possible under strong privacy constraints, yet requires significant computatio...
https://proceedings.mlr.press/v235/buathong24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/buathong24a/buathong24a.pdf
https://openreview.net/forum?id=scMAQ3mFAA
Bayesian Optimization of Function Networks with Partial Evaluations
https://proceedings.mlr.press/v235/buathong24a.html
Poompol Buathong, Jiayue Wan, Raul Astudillo, Sam Daulton, Maximilian Balandat, Peter I. Frazier
https://proceedings.mlr.press/v235/buathong24a.html
ICML 2024
Bayesian optimization is a powerful framework for optimizing functions that are expensive or time-consuming to evaluate. Recent work has considered Bayesian optimization of function networks (BOFN), where the objective function is given by a network of functions, each taking as input the output of previous nodes in the...
https://proceedings.mlr.press/v235/buchholz24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/buchholz24a/buchholz24a.pdf
https://openreview.net/forum?id=GyV33H5Uuk
Robustness of Nonlinear Representation Learning
https://proceedings.mlr.press/v235/buchholz24a.html
Simon Buchholz, Bernhard Schölkopf
https://proceedings.mlr.press/v235/buchholz24a.html
ICML 2024
We study the problem of unsupervised representation learning in slightly misspecified settings, and thus formalize the study of robustness of nonlinear representation learning. We focus on the case where the mixing is close to a local isometry in a suitable distance and show based on existing rigidity results that the ...
https://proceedings.mlr.press/v235/bui24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bui24a/bui24a.pdf
https://openreview.net/forum?id=lon750Kf7n
Density-Softmax: Efficient Test-time Model for Uncertainty Estimation and Robustness under Distribution Shifts
https://proceedings.mlr.press/v235/bui24a.html
Ha Manh Bui, Anqi Liu
https://proceedings.mlr.press/v235/bui24a.html
ICML 2024
Sampling-based methods, e.g., Deep Ensembles and Bayesian Neural Nets, have become promising approaches to improve the quality of uncertainty estimation and robust generalization. However, they suffer from a large model size and high latency at test time, which limits the scalability needed for low-resource devices and ...
https://proceedings.mlr.press/v235/bui24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bui24b/bui24b.pdf
https://openreview.net/forum?id=2T00oYk54P
Explaining Graph Neural Networks via Structure-aware Interaction Index
https://proceedings.mlr.press/v235/bui24b.html
Ngoc Bui, Hieu Trung Nguyen, Viet Anh Nguyen, Rex Ying
https://proceedings.mlr.press/v235/bui24b.html
ICML 2024
The Shapley value is a prominent tool for interpreting black-box machine learning models thanks to its strong theoretical foundation. However, for models with structured inputs, such as graph neural networks, existing Shapley-based explainability approaches either focus solely on node-wise importance or neglect the gra...
https://proceedings.mlr.press/v235/bulian24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bulian24a/bulian24a.pdf
https://openreview.net/forum?id=ScIHQoTUjT
Assessing Large Language Models on Climate Information
https://proceedings.mlr.press/v235/bulian24a.html
Jannis Bulian, Mike S. Schäfer, Afra Amini, Heidi Lam, Massimiliano Ciaramita, Ben Gaiarin, Michelle Chen Huebscher, Christian Buck, Niels G. Mede, Markus Leippold, Nadine Strauss
https://proceedings.mlr.press/v235/bulian24a.html
ICML 2024
As Large Language Models (LLMs) rise in popularity, it is necessary to assess their capability in critically relevant domains. We present a comprehensive evaluation framework, grounded in science communication research, to assess LLM responses to questions about climate change. Our framework emphasizes both presentatio...
https://proceedings.mlr.press/v235/burns24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/burns24a/burns24a.pdf
https://openreview.net/forum?id=l0OGoZPZuC
Semantically-correlated memories in a dense associative model
https://proceedings.mlr.press/v235/burns24a.html
Thomas F Burns
https://proceedings.mlr.press/v235/burns24a.html
ICML 2024
I introduce a novel associative memory model named Correlated Dense Associative Memory (CDAM), which integrates both auto- and hetero-association in a unified framework for continuous-valued memory patterns. Employing an arbitrary graph structure to semantically link memory patterns, CDAM is theoretically and numerical...
https://proceedings.mlr.press/v235/burns24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/burns24b/burns24b.pdf
https://openreview.net/forum?id=ghNRg2mEgN
Weak-to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision
https://proceedings.mlr.press/v235/burns24b.html
Collin Burns, Pavel Izmailov, Jan Hendrik Kirchner, Bowen Baker, Leo Gao, Leopold Aschenbrenner, Yining Chen, Adrien Ecoffet, Manas Joglekar, Jan Leike, Ilya Sutskever, Jeffrey Wu
https://proceedings.mlr.press/v235/burns24b.html
ICML 2024
Widely used alignment techniques, such as reinforcement learning from human feedback (RLHF), rely on the ability of humans to supervise model behavior—for example, to evaluate whether a model faithfully followed instructions or generated safe outputs. However, future superhuman models will behave in complex ways too di...
https://proceedings.mlr.press/v235/butt24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/butt24a/butt24a.pdf
https://openreview.net/forum?id=SXVn5IFsrs
CodeIt: Self-Improving Language Models with Prioritized Hindsight Replay
https://proceedings.mlr.press/v235/butt24a.html
Natasha Butt, Blazej Manczak, Auke Wiggers, Corrado Rainone, David W. Zhang, Michaël Defferrard, Taco Cohen
https://proceedings.mlr.press/v235/butt24a.html
ICML 2024
Large language models are increasingly solving tasks that are commonly believed to require human-level reasoning ability. However, these models still perform very poorly on benchmarks of general intelligence such as the Abstraction and Reasoning Corpus (ARC). In this paper, we approach the ARC as a programming-by-examp...
https://proceedings.mlr.press/v235/buzaglo24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/buzaglo24a/buzaglo24a.pdf
https://openreview.net/forum?id=3eHNvPHL9Z
How Uniform Random Weights Induce Non-uniform Bias: Typical Interpolating Neural Networks Generalize with Narrow Teachers
https://proceedings.mlr.press/v235/buzaglo24a.html
Gon Buzaglo, Itamar Harel, Mor Shpigel Nacson, Alon Brutzkus, Nathan Srebro, Daniel Soudry
https://proceedings.mlr.press/v235/buzaglo24a.html
ICML 2024
A main theoretical puzzle is why over-parameterized Neural Networks (NNs) generalize well when trained to zero loss (i.e., so they interpolate the data). Usually, the NN is trained with Stochastic Gradient Descent (SGD) or one of its variants. However, recent empirical work examined the generalization of a random NN th...
https://proceedings.mlr.press/v235/byambadalai24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/byambadalai24a/byambadalai24a.pdf
https://openreview.net/forum?id=RDofzHLuX4
Estimating Distributional Treatment Effects in Randomized Experiments: Machine Learning for Variance Reduction
https://proceedings.mlr.press/v235/byambadalai24a.html
Undral Byambadalai, Tatsushi Oka, Shota Yasui
https://proceedings.mlr.press/v235/byambadalai24a.html
ICML 2024
We propose a novel regression adjustment method designed for estimating distributional treatment effect parameters in randomized experiments. Randomized experiments have been extensively used to estimate treatment effects in various scientific fields. However, to gain deeper insights, it is essential to estimate distri...
https://proceedings.mlr.press/v235/cabannes24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/cabannes24a/cabannes24a.pdf
https://openreview.net/forum?id=A9fLbXLRTK
Learning Associative Memories with Gradient Descent
https://proceedings.mlr.press/v235/cabannes24a.html
Vivien Cabannes, Berfin Simsek, Alberto Bietti
https://proceedings.mlr.press/v235/cabannes24a.html
ICML 2024
This work focuses on the training dynamics of one associative memory module storing outer products of token embeddings. We reduce this problem to the study of a system of particles, which interact according to properties of the data distribution and correlations between embeddings. Through theory and experiments, we pr...