Fields per record, one value per line in this order:
abs (proceedings page URL), Download PDF (PDF URL), OpenReview (forum URL), title, url, authors, detail_url, tags (venue), abstract (truncated)
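Since every record in this listing spans exactly nine lines in the field order above, the body can be split mechanically. A minimal sketch (the `FIELDS` names and `parse_records` helper are mine, not part of the dataset; it assumes the header has already been stripped and the input contains only record lines):

```python
# Parse the fixed 9-line records of this listing into dicts.
FIELDS = ["abs", "pdf", "openreview", "title", "url",
          "authors", "detail_url", "tags", "abstract"]

def parse_records(text: str) -> list[dict]:
    # Drop blank lines; each surviving line is one field value.
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    assert len(lines) % len(FIELDS) == 0, "truncated record"
    return [dict(zip(FIELDS, lines[i:i + len(FIELDS)]))
            for i in range(0, len(lines), len(FIELDS))]
```

For example, feeding it the nine lines of one entry yields a single dict whose `"tags"` value is the venue string (here always "ICML 2024").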
https://proceedings.mlr.press/v235/cachet24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/cachet24a/cachet24a.pdf
https://openreview.net/forum?id=ZrM67ZZ5vj
Bridging Environments and Language with Rendering Functions and Vision-Language Models
https://proceedings.mlr.press/v235/cachet24a.html
Theo Cachet, Christopher R Dance, Olivier Sigaud
https://proceedings.mlr.press/v235/cachet24a.html
ICML 2024
Vision-language models (VLMs) have tremendous potential for grounding language, and thus enabling language-conditioned agents (LCAs) to perform diverse tasks specified with text. This has motivated the study of LCAs based on reinforcement learning (RL) with rewards given by rendering images of an environment and evalua...
https://proceedings.mlr.press/v235/cai24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/cai24a/cai24a.pdf
https://openreview.net/forum?id=PnyYgWMMwj
Vocabulary for Universal Approximation: A Linguistic Perspective of Mapping Compositions
https://proceedings.mlr.press/v235/cai24a.html
Yongqiang Cai
https://proceedings.mlr.press/v235/cai24a.html
ICML 2024
In recent years, deep learning-based sequence modelings, such as language models, have received much attention and success, which pushes researchers to explore the possibility of transforming non-sequential problems into a sequential form. Following this thought, deep neural networks can be represented as composite fun...
https://proceedings.mlr.press/v235/cai24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/cai24b/cai24b.pdf
https://openreview.net/forum?id=PEpbUobfJv
Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads
https://proceedings.mlr.press/v235/cai24b.html
Tianle Cai, Yuhong Li, Zhengyang Geng, Hongwu Peng, Jason D. Lee, Deming Chen, Tri Dao
https://proceedings.mlr.press/v235/cai24b.html
ICML 2024
Large Language Models (LLMs) employ auto-regressive decoding that requires sequential computation, with each step reliant on the previous one’s output. This creates a bottleneck as each step necessitates moving the full model parameters from High-Bandwidth Memory (HBM) to the accelerator’s cache. While methods such as ...
https://proceedings.mlr.press/v235/cai24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/cai24c/cai24c.pdf
https://openreview.net/forum?id=YlcSyCz21c
Enhancing Cross-Modal Fine-Tuning with Gradually Intermediate Modality Generation
https://proceedings.mlr.press/v235/cai24c.html
Lincan Cai, Shuang Li, Wenxuan Ma, Jingxuan Kang, Binhui Xie, Zixun Sun, Chengwei Zhu
https://proceedings.mlr.press/v235/cai24c.html
ICML 2024
Large-scale pretrained models have proven immensely valuable in handling data-intensive modalities like text and image. However, fine-tuning these models for certain specialized modalities, such as protein sequence and cosmic ray, poses challenges due to the significant modality discrepancy and scarcity of labeled data...
https://proceedings.mlr.press/v235/cai24d.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/cai24d/cai24d.pdf
https://openreview.net/forum?id=bplNmU2ROC
Batch and match: black-box variational inference with a score-based divergence
https://proceedings.mlr.press/v235/cai24d.html
Diana Cai, Chirag Modi, Loucas Pillaud-Vivien, Charles Margossian, Robert M. Gower, David Blei, Lawrence K. Saul
https://proceedings.mlr.press/v235/cai24d.html
ICML 2024
Most leading implementations of black-box variational inference (BBVI) are based on optimizing a stochastic evidence lower bound (ELBO). But such approaches to BBVI often converge slowly due to the high variance of their gradient estimates and their sensitivity to hyperparameters. In this work, we propose batch and mat...
https://proceedings.mlr.press/v235/cai24e.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/cai24e/cai24e.pdf
https://openreview.net/forum?id=9vKRhnflAs
Flextron: Many-in-One Flexible Large Language Model
https://proceedings.mlr.press/v235/cai24e.html
Ruisi Cai, Saurav Muralidharan, Greg Heinrich, Hongxu Yin, Zhangyang Wang, Jan Kautz, Pavlo Molchanov
https://proceedings.mlr.press/v235/cai24e.html
ICML 2024
Training modern LLMs is extremely resource intensive, and customizing them for various deployment scenarios characterized by limited compute and memory resources through repeated training is impractical. In this paper, we introduce Flextron, a network architecture and post-training model optimization framework supporti...
https://proceedings.mlr.press/v235/cai24f.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/cai24f/cai24f.pdf
https://openreview.net/forum?id=EK7fuAMNoI
Accelerated Algorithms for Constrained Nonconvex-Nonconcave Min-Max Optimization and Comonotone Inclusion
https://proceedings.mlr.press/v235/cai24f.html
Yang Cai, Argyris Oikonomou, Weiqiang Zheng
https://proceedings.mlr.press/v235/cai24f.html
ICML 2024
We study constrained comonotone min-max optimization, a structured class of nonconvex-nonconcave min-max optimization problems, and their generalization to comonotone inclusion. In our first contribution, we extend the Extra Anchored Gradient (EAG) algorithm, originally proposed by Yoon and Ryu (2021) for unconstrained...
https://proceedings.mlr.press/v235/cai24g.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/cai24g/cai24g.pdf
https://openreview.net/forum?id=NUlyqMyhO9
LoCoCo: Dropping In Convolutions for Long Context Compression
https://proceedings.mlr.press/v235/cai24g.html
Ruisi Cai, Yuandong Tian, Zhangyang Wang, Beidi Chen
https://proceedings.mlr.press/v235/cai24g.html
ICML 2024
This paper tackles the memory hurdle of processing long context sequences in Large Language Models (LLMs), by presenting a novel approach, Dropping In Convolutions for Long Context Compression (LoCoCo). LoCoCo employs only a fixed-size Key-Value (KV) cache, and can enhance efficiency in both inference and fine-tunin...
https://proceedings.mlr.press/v235/cai24h.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/cai24h/cai24h.pdf
https://openreview.net/forum?id=YB1O99gK7b
On Gradient-like Explanation under a Black-box Setting: When Black-box Explanations Become as Good as White-box
https://proceedings.mlr.press/v235/cai24h.html
Yi Cai, Gerhard Wunder
https://proceedings.mlr.press/v235/cai24h.html
ICML 2024
Attribution methods shed light on the explainability of data-driven approaches such as deep learning models by uncovering the most influential features in a to-be-explained decision. While determining feature attributions via gradients delivers promising results, the internal access required for acquiring gradients can...
https://proceedings.mlr.press/v235/cai24i.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/cai24i/cai24i.pdf
https://openreview.net/forum?id=4sikyurTLX
Sample-specific Masks for Visual Reprogramming-based Prompting
https://proceedings.mlr.press/v235/cai24i.html
Chengyi Cai, Zesheng Ye, Lei Feng, Jianzhong Qi, Feng Liu
https://proceedings.mlr.press/v235/cai24i.html
ICML 2024
Visual reprogramming (VR) is a prompting technique that aims to re-purpose a pre-trained model (e.g., a classifier on ImageNet) to target tasks (e.g., medical data prediction) by learning a small-scale pattern added into input images instead of tuning considerable parameters within the model. The location of the patter...
https://proceedings.mlr.press/v235/calandriello24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/calandriello24a/calandriello24a.pdf
https://openreview.net/forum?id=2RQqg2Y7Y6
Human Alignment of Large Language Models through Online Preference Optimisation
https://proceedings.mlr.press/v235/calandriello24a.html
Daniele Calandriello, Zhaohan Daniel Guo, Remi Munos, Mark Rowland, Yunhao Tang, Bernardo Avila Pires, Pierre Harvey Richemond, Charline Le Lan, Michal Valko, Tianqi Liu, Rishabh Joshi, Zeyu Zheng, Bilal Piot
https://proceedings.mlr.press/v235/calandriello24a.html
ICML 2024
Ensuring alignment of language model’s outputs with human preferences is critical to guarantee a useful, safe, and pleasant user experience. Thus, human alignment has been extensively studied recently and several methods such as Reinforcement Learning from Human Feedback (RLHF), Direct Preference Optimisation (DPO) and Seq...
https://proceedings.mlr.press/v235/calvo-ordonez24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/calvo-ordonez24a/calvo-ordonez24a.pdf
https://openreview.net/forum?id=jNab9mXEyj
Partially Stochastic Infinitely Deep Bayesian Neural Networks
https://proceedings.mlr.press/v235/calvo-ordonez24a.html
Sergio Calvo Ordoñez, Matthieu Meunier, Francesco Piatti, Yuantao Shi
https://proceedings.mlr.press/v235/calvo-ordonez24a.html
ICML 2024
In this paper, we present Partially Stochastic Infinitely Deep Bayesian Neural Networks, a novel family of architectures that integrates partial stochasticity into the framework of infinitely deep neural networks. Our new class of architectures is designed to improve the computational efficiency of existing architectur...
https://proceedings.mlr.press/v235/campbell24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/campbell24a/campbell24a.pdf
https://openreview.net/forum?id=kQwSbv0BR4
Generative Flows on Discrete State-Spaces: Enabling Multimodal Flows with Applications to Protein Co-Design
https://proceedings.mlr.press/v235/campbell24a.html
Andrew Campbell, Jason Yim, Regina Barzilay, Tom Rainforth, Tommi Jaakkola
https://proceedings.mlr.press/v235/campbell24a.html
ICML 2024
Combining discrete and continuous data is an important capability for generative models. We present Discrete Flow Models (DFMs), a new flow-based model of discrete data that provides the missing link in enabling flow-based generative models to be applied to multimodal continuous and discrete data problems. Our key insi...
https://proceedings.mlr.press/v235/candido-ramos24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/candido-ramos24a/candido-ramos24a.pdf
https://openreview.net/forum?id=JAfIDm7NED
Mimicking Better by Matching the Approximate Action Distribution
https://proceedings.mlr.press/v235/candido-ramos24a.html
Joao Candido Ramos, Lionel Blondé, Naoya Takeishi, Alexandros Kalousis
https://proceedings.mlr.press/v235/candido-ramos24a.html
ICML 2024
In this paper, we introduce MAAD, a novel, sample-efficient on-policy algorithm for Imitation Learning from Observations. MAAD utilizes a surrogate reward signal, which can be derived from various sources such as adversarial games, trajectory matching objectives, or optimal transport criteria. To compensate for the non...
https://proceedings.mlr.press/v235/canturk24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/canturk24a/canturk24a.pdf
https://openreview.net/forum?id=UTSCK582Yo
Graph Positional and Structural Encoder
https://proceedings.mlr.press/v235/canturk24a.html
Semih Cantürk, Renming Liu, Olivier Lapointe-Gagné, Vincent Létourneau, Guy Wolf, Dominique Beaini, Ladislav Rampášek
https://proceedings.mlr.press/v235/canturk24a.html
ICML 2024
Positional and structural encodings (PSE) enable better identifiability of nodes within a graph, rendering them essential tools for empowering modern GNNs, and in particular graph Transformers. However, designing PSEs that work optimally for all graph prediction tasks is a challenging and unsolved problem. Here, we pre...
https://proceedings.mlr.press/v235/cao24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/cao24a/cao24a.pdf
https://openreview.net/forum?id=LJcIIhqGDN
Successor Features for Efficient Multi-Subject Controlled Text Generation
https://proceedings.mlr.press/v235/cao24a.html
Meng Cao, Mehdi Fatemi, Jackie Ck Cheung, Samira Shabanian
https://proceedings.mlr.press/v235/cao24a.html
ICML 2024
While large language models (LLMs) have achieved impressive performance in generating fluent and realistic text, controlling the generated text so that it exhibits properties such as safety, factuality, and non-toxicity remains challenging. Existing decoding-based controllable text generation methods are static in term...
https://proceedings.mlr.press/v235/cao24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/cao24b/cao24b.pdf
https://openreview.net/forum?id=PAbkWU0KDG
Limited Preference Aided Imitation Learning from Imperfect Demonstrations
https://proceedings.mlr.press/v235/cao24b.html
Xingchen Cao, Fan-Ming Luo, Junyin Ye, Tian Xu, Zhilong Zhang, Yang Yu
https://proceedings.mlr.press/v235/cao24b.html
ICML 2024
Imitation learning mimics high-quality policies from expert data for sequential decision-making tasks. However, its efficacy is hindered in scenarios where optimal demonstrations are unavailable, and only imperfect demonstrations are present. To address this issue, introducing additional limited human preferences is a ...
https://proceedings.mlr.press/v235/cao24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/cao24c/cao24c.pdf
https://openreview.net/forum?id=LYpGLrC4oq
Predictive Dynamic Fusion
https://proceedings.mlr.press/v235/cao24c.html
Bing Cao, Yinan Xia, Yi Ding, Changqing Zhang, Qinghua Hu
https://proceedings.mlr.press/v235/cao24c.html
ICML 2024
Multimodal fusion is crucial in joint decision-making systems for rendering holistic judgments. Since multimodal data changes in open environments, dynamic fusion has emerged and achieved remarkable progress in numerous applications. However, most existing dynamic multimodal fusion methods lack theoretical guarantees a...
https://proceedings.mlr.press/v235/cao24d.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/cao24d/cao24d.pdf
https://openreview.net/forum?id=xZO7SmM12y
Envisioning Outlier Exposure by Large Language Models for Out-of-Distribution Detection
https://proceedings.mlr.press/v235/cao24d.html
Chentao Cao, Zhun Zhong, Zhanke Zhou, Yang Liu, Tongliang Liu, Bo Han
https://proceedings.mlr.press/v235/cao24d.html
ICML 2024
Detecting out-of-distribution (OOD) samples is essential when deploying machine learning models in open-world scenarios. Zero-shot OOD detection, requiring no training on in-distribution (ID) data, has been possible with the advent of vision-language models like CLIP. Existing methods build a text-based classifier with...
https://proceedings.mlr.press/v235/caragiannis24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/caragiannis24a/caragiannis24a.pdf
https://openreview.net/forum?id=nsjfoziR5j
Can a Few Decide for Many? The Metric Distortion of Sortition
https://proceedings.mlr.press/v235/caragiannis24a.html
Ioannis Caragiannis, Evi Micha, Jannik Peters
https://proceedings.mlr.press/v235/caragiannis24a.html
ICML 2024
Recent works have studied the design of algorithms for selecting representative sortition panels. However, the most central question remains unaddressed: Do these panels reflect the entire population’s opinion? We present a positive answer by adopting the concept of metric distortion from computational social choice, w...
https://proceedings.mlr.press/v235/carlini24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/carlini24a/carlini24a.pdf
https://openreview.net/forum?id=VE3yWXt3KB
Stealing part of a production language model
https://proceedings.mlr.press/v235/carlini24a.html
Nicholas Carlini, Daniel Paleka, Krishnamurthy Dj Dvijotham, Thomas Steinke, Jonathan Hayase, A. Feder Cooper, Katherine Lee, Matthew Jagielski, Milad Nasr, Arthur Conmy, Eric Wallace, David Rolnick, Florian Tramèr
https://proceedings.mlr.press/v235/carlini24a.html
ICML 2024
We introduce the first model-stealing attack that extracts precise, nontrivial information from black-box production language models like OpenAI’s ChatGPT or Google’s PaLM-2. Specifically, our attack recovers the embedding projection layer (up to symmetries) of a transformer model, given typical API access. For under $...
https://proceedings.mlr.press/v235/carroll24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/carroll24a/carroll24a.pdf
https://openreview.net/forum?id=itYGbe0Cs1
AI Alignment with Changing and Influenceable Reward Functions
https://proceedings.mlr.press/v235/carroll24a.html
Micah Carroll, Davis Foote, Anand Siththaranjan, Stuart Russell, Anca Dragan
https://proceedings.mlr.press/v235/carroll24a.html
ICML 2024
Existing AI alignment approaches assume that preferences are static, which is unrealistic: our preferences change, and may even be influenced by our interactions with AI systems themselves. To clarify the consequences of incorrectly assuming static preferences, we introduce Dynamic Reward Markov Decision Processes (DR-...
https://proceedings.mlr.press/v235/cassel24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/cassel24a/cassel24a.pdf
https://openreview.net/forum?id=hXQOO6VsxH
Near-Optimal Regret in Linear MDPs with Aggregate Bandit Feedback
https://proceedings.mlr.press/v235/cassel24a.html
Asaf Cassel, Haipeng Luo, Aviv Rosenberg, Dmitry Sotnikov
https://proceedings.mlr.press/v235/cassel24a.html
ICML 2024
In many real-world applications, it is hard to provide a reward signal in each step of a Reinforcement Learning (RL) process and more natural to give feedback when an episode ends. To this end, we study the recently proposed model of RL with Aggregate Bandit Feedback (RL-ABF), where the agent only observes the sum of r...
https://proceedings.mlr.press/v235/castiglioni24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/castiglioni24a/castiglioni24a.pdf
https://openreview.net/forum?id=shzEkKPrsn
Online Learning under Budget and ROI Constraints via Weak Adaptivity
https://proceedings.mlr.press/v235/castiglioni24a.html
Matteo Castiglioni, Andrea Celli, Christian Kroer
https://proceedings.mlr.press/v235/castiglioni24a.html
ICML 2024
We study online learning problems in which a decision maker has to make a sequence of costly decisions, with the goal of maximizing their expected reward while adhering to budget and return-on-investment (ROI) constraints. Existing primal-dual algorithms designed for constrained online learning problems under adversari...
https://proceedings.mlr.press/v235/castin24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/castin24a/castin24a.pdf
https://openreview.net/forum?id=aP0H8A1ywk
How Smooth Is Attention?
https://proceedings.mlr.press/v235/castin24a.html
Valérie Castin, Pierre Ablin, Gabriel Peyré
https://proceedings.mlr.press/v235/castin24a.html
ICML 2024
Self-attention and masked self-attention are at the heart of Transformers’ outstanding success. Still, our mathematical understanding of attention, in particular of its Lipschitz properties — which are key when it comes to analyzing robustness and expressive power — is incomplete. We provide a detailed study of the Lip...
https://proceedings.mlr.press/v235/catalano24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/catalano24a/catalano24a.pdf
https://openreview.net/forum?id=cmy38XZlJu
Hierarchical Integral Probability Metrics: A distance on random probability measures with low sample complexity
https://proceedings.mlr.press/v235/catalano24a.html
Marta Catalano, Hugo Lavenant
https://proceedings.mlr.press/v235/catalano24a.html
ICML 2024
Random probabilities are a key component to many nonparametric methods in Statistics and Machine Learning. To quantify comparisons between different laws of random probabilities several works are starting to use the elegant Wasserstein over Wasserstein distance. In this paper we prove that the infinite dimensionality o...
https://proceedings.mlr.press/v235/cattaneo24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/cattaneo24a/cattaneo24a.pdf
https://openreview.net/forum?id=y8YovS0lOg
On the Implicit Bias of Adam
https://proceedings.mlr.press/v235/cattaneo24a.html
Matias D. Cattaneo, Jason Matthew Klusowski, Boris Shigida
https://proceedings.mlr.press/v235/cattaneo24a.html
ICML 2024
In previous literature, backward error analysis was used to find ordinary differential equations (ODEs) approximating the gradient descent trajectory. It was found that finite step sizes implicitly regularize solutions because terms appearing in the ODEs penalize the two-norm of the loss gradients. We prove that the ex...
https://proceedings.mlr.press/v235/celik24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/celik24a/celik24a.pdf
https://openreview.net/forum?id=9ZkUFSwlUH
Acquiring Diverse Skills using Curriculum Reinforcement Learning with Mixture of Experts
https://proceedings.mlr.press/v235/celik24a.html
Onur Celik, Aleksandar Taranovic, Gerhard Neumann
https://proceedings.mlr.press/v235/celik24a.html
ICML 2024
Reinforcement learning (RL) is a powerful approach for acquiring a good-performing policy. However, learning diverse skills is challenging in RL due to the commonly used Gaussian policy parameterization. We propose Diverse Skill Learning (Di-SkilL), an RL method for learning diverse skills using Mixture of Experts, whe...
https://proceedings.mlr.press/v235/celis24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/celis24a/celis24a.pdf
https://openreview.net/forum?id=9QRcp2ubDt
Centralized Selection with Preferences in the Presence of Biases
https://proceedings.mlr.press/v235/celis24a.html
L. Elisa Celis, Amit Kumar, Nisheeth K. Vishnoi, Andrew Xu
https://proceedings.mlr.press/v235/celis24a.html
ICML 2024
This paper considers the scenario in which there are multiple institutions, each with a limited capacity for candidates, and candidates, each with preferences over the institutions. A central entity evaluates the utility of each candidate to the institutions, and the goal is to select candidates for each institution in...
https://proceedings.mlr.press/v235/cen24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/cen24a/cen24a.pdf
https://openreview.net/forum?id=o1gS6MNAw8
Using Left and Right Brains Together: Towards Vision and Language Planning
https://proceedings.mlr.press/v235/cen24a.html
Jun Cen, Chenfei Wu, Xiao Liu, Shengming Yin, Yixuan Pei, Jinglong Yang, Qifeng Chen, Nan Duan, Jianguo Zhang
https://proceedings.mlr.press/v235/cen24a.html
ICML 2024
Large Language Models (LLMs) and Large Multi-modality Models (LMMs) have demonstrated remarkable decision-making capabilities on a variety of tasks. However, they inherently operate planning within the language space, lacking the vision and spatial imagination ability. In contrast, humans utilize both left and right h...
https://proceedings.mlr.press/v235/cen24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/cen24b/cen24b.pdf
https://openreview.net/forum?id=JNHK11bAGl
Feasibility Consistent Representation Learning for Safe Reinforcement Learning
https://proceedings.mlr.press/v235/cen24b.html
Zhepeng Cen, Yihang Yao, Zuxin Liu, Ding Zhao
https://proceedings.mlr.press/v235/cen24b.html
ICML 2024
In the field of safe reinforcement learning (RL), finding a balance between satisfying safety constraints and optimizing reward performance presents a significant challenge. A key obstacle in this endeavor is the estimation of safety constraints, which is typically more difficult than estimating a reward metric due to ...
https://proceedings.mlr.press/v235/cetin24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/cetin24a/cetin24a.pdf
https://openreview.net/forum?id=japBn31gXC
Simple Ingredients for Offline Reinforcement Learning
https://proceedings.mlr.press/v235/cetin24a.html
Edoardo Cetin, Andrea Tirinzoni, Matteo Pirotta, Alessandro Lazaric, Yann Ollivier, Ahmed Touati
https://proceedings.mlr.press/v235/cetin24a.html
ICML 2024
Offline reinforcement learning algorithms have proven effective on datasets highly connected to the target downstream task. Yet, by leveraging a novel testbed (MOOD) in which trajectories come from heterogeneous sources, we show that existing methods struggle with diverse data: their performance considerably deteriorat...
https://proceedings.mlr.press/v235/cha24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/cha24a/cha24a.pdf
https://openreview.net/forum?id=9jXS07TIBH
Regularizing with Pseudo-Negatives for Continual Self-Supervised Learning
https://proceedings.mlr.press/v235/cha24a.html
Sungmin Cha, Kyunghyun Cho, Taesup Moon
https://proceedings.mlr.press/v235/cha24a.html
ICML 2024
We introduce a novel Pseudo-Negative Regularization (PNR) framework for effective continual self-supervised learning (CSSL). Our PNR leverages pseudo-negatives obtained through model-based augmentation in a way that newly learned representations may not contradict what has been learned in the past. Specifically, for th...
https://proceedings.mlr.press/v235/chadha24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chadha24a/chadha24a.pdf
https://openreview.net/forum?id=FVmqX0sYz9
Auditing Private Prediction
https://proceedings.mlr.press/v235/chadha24a.html
Karan Chadha, Matthew Jagielski, Nicolas Papernot, Christopher A. Choquette-Choo, Milad Nasr
https://proceedings.mlr.press/v235/chadha24a.html
ICML 2024
Differential privacy (DP) offers a theoretical upper bound on the potential privacy leakage of an algorithm, while empirical auditing establishes a practical lower bound. Auditing techniques exist for DP training algorithms. However machine learning can also be made private at inference. We propose the first framework ...
https://proceedings.mlr.press/v235/chakraborty24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chakraborty24a/chakraborty24a.pdf
https://openreview.net/forum?id=CJbhtpcyGL
Position: On the Possibilities of AI-Generated Text Detection
https://proceedings.mlr.press/v235/chakraborty24a.html
Souradip Chakraborty, Amrit Bedi, Sicheng Zhu, Bang An, Dinesh Manocha, Furong Huang
https://proceedings.mlr.press/v235/chakraborty24a.html
ICML 2024
Our study addresses the challenge of distinguishing human-written text from Large Language Model (LLM) outputs. We provide evidence that this differentiation is consistently feasible, except when human and machine text distributions are indistinguishable across their entire support. Employing information theory, we sho...
https://proceedings.mlr.press/v235/chakraborty24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chakraborty24b/chakraborty24b.pdf
https://openreview.net/forum?id=8tzjEMF0Vq
MaxMin-RLHF: Alignment with Diverse Human Preferences
https://proceedings.mlr.press/v235/chakraborty24b.html
Souradip Chakraborty, Jiahao Qiu, Hui Yuan, Alec Koppel, Dinesh Manocha, Furong Huang, Amrit Bedi, Mengdi Wang
https://proceedings.mlr.press/v235/chakraborty24b.html
ICML 2024
Reinforcement Learning from Human Feedback (RLHF) aligns language models to human preferences by employing a singular reward model derived from preference data. However, the single reward model overlooks the rich diversity of human preferences inherent in data collected from multiple users. In this work, we first deriv...
https://proceedings.mlr.press/v235/chan24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chan24a/chan24a.pdf
https://openreview.net/forum?id=eyxVRMrZ4m
Dense Reward for Free in Reinforcement Learning from Human Feedback
https://proceedings.mlr.press/v235/chan24a.html
Alex James Chan, Hao Sun, Samuel Holt, Mihaela Van Der Schaar
https://proceedings.mlr.press/v235/chan24a.html
ICML 2024
Reinforcement Learning from Human Feedback (RLHF) has been credited as the key advance that has allowed Large Language Models (LLMs) to effectively follow instructions and produce useful assistance. Classically, this involves generating completions from the LLM in response to a query before using a separate reward mode...
https://proceedings.mlr.press/v235/chan24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chan24b/chan24b.pdf
https://openreview.net/forum?id=Q8uJyOwOsd
Scribble-Supervised Semantic Segmentation with Prototype-based Feature Augmentation
https://proceedings.mlr.press/v235/chan24b.html
Guiyang Chan, Pengcheng Zhang, Hai Dong, Shunhui Ji, Bainian Chen
https://proceedings.mlr.press/v235/chan24b.html
ICML 2024
Scribble-supervised semantic segmentation presents a cost-effective training method that utilizes annotations generated through scribbling. It is valued in attaining high performance while minimizing annotation costs, which has made it highly regarded among researchers. Scribble supervision propagates information from ...
https://proceedings.mlr.press/v235/chang24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chang24a/chang24a.pdf
https://openreview.net/forum?id=fywWm06IGn
Feature Importance Disparities for Data Bias Investigations
https://proceedings.mlr.press/v235/chang24a.html
Peter W Chang, Leor Fishman, Seth Neel
https://proceedings.mlr.press/v235/chang24a.html
ICML 2024
It is widely held that one cause of downstream bias in classifiers is bias present in the training data. Rectifying such biases may involve context-dependent interventions such as training separate models on subgroups, removing features with bias in the collection process, or even conducting real-world experiments to a...
https://proceedings.mlr.press/v235/chang24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chang24b/chang24b.pdf
https://openreview.net/forum?id=KYrAZSbEv6
Inferring Dynamic Networks from Marginals with Iterative Proportional Fitting
https://proceedings.mlr.press/v235/chang24b.html
Serina Chang, Frederic Koehler, Zhaonan Qu, Jure Leskovec, Johan Ugander
https://proceedings.mlr.press/v235/chang24b.html
ICML 2024
A common network inference problem, arising from real-world data constraints, is how to infer a dynamic network from its time-aggregated adjacency matrix and time-varying marginals (i.e., row and column sums). Prior approaches to this problem have repurposed the classic iterative proportional fitting (IPF) procedure, a...
https://proceedings.mlr.press/v235/chang24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chang24c/chang24c.pdf
https://openreview.net/forum?id=MjGCD8wk1k
LaMAGIC: Language-Model-based Topology Generation for Analog Integrated Circuits
https://proceedings.mlr.press/v235/chang24c.html
Chen-Chia Chang, Yikang Shen, Shaoze Fan, Jing Li, Shun Zhang, Ningyuan Cao, Yiran Chen, Xin Zhang
https://proceedings.mlr.press/v235/chang24c.html
ICML 2024
In the realm of electronic and electrical engineering, automation of analog circuit is increasingly vital given the complexity and customized requirements of modern applications. However, existing methods only develop search-based algorithms that require many simulation iterations to design a custom circuit topology, w...
https://proceedings.mlr.press/v235/chang24d.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chang24d/chang24d.pdf
https://openreview.net/forum?id=jVXJdGQ4eD
MagicPose: Realistic Human Poses and Facial Expressions Retargeting with Identity-aware Diffusion
https://proceedings.mlr.press/v235/chang24d.html
Di Chang, Yichun Shi, Quankai Gao, Hongyi Xu, Jessica Fu, Guoxian Song, Qing Yan, Yizhe Zhu, Xiao Yang, Mohammad Soleymani
https://proceedings.mlr.press/v235/chang24d.html
ICML 2024
In this work, we propose MagicPose, a diffusion-based model for 2D human pose and facial expression retargeting. Specifically, given a reference image, we aim to generate a person’s new images by controlling the poses and facial expressions while keeping the identity unchanged. To this end, we propose a two-stage train...
https://proceedings.mlr.press/v235/chang24e.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chang24e/chang24e.pdf
https://openreview.net/forum?id=hTiNFCNxM1
From Biased Selective Labels to Pseudo-Labels: An Expectation-Maximization Framework for Learning from Biased Decisions
https://proceedings.mlr.press/v235/chang24e.html
Trenton Chang, Jenna Wiens
https://proceedings.mlr.press/v235/chang24e.html
ICML 2024
Selective labels occur when label observations are subject to a decision-making process; e.g., diagnoses that depend on the administration of laboratory tests. We study a clinically-inspired selective label problem called disparate censorship, where labeling biases vary across subgroups and unlabeled individuals are im...
https://proceedings.mlr.press/v235/chanpuriya24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chanpuriya24a/chanpuriya24a.pdf
https://openreview.net/forum?id=0XDO74NlOd
On the Role of Edge Dependency in Graph Generative Models
https://proceedings.mlr.press/v235/chanpuriya24a.html
Sudhanshu Chanpuriya, Cameron N Musco, Konstantinos Sotiropoulos, Charalampos Tsourakakis
https://proceedings.mlr.press/v235/chanpuriya24a.html
ICML 2024
We investigate the trade-off between the representation power of graph generative models and model overlap, i.e., the degree to which the model generates diverse outputs versus regurgitating its training data. In particular, we delineate a nested hierarchy of graph generative models categorized into three levels of com...
https://proceedings.mlr.press/v235/chattopadhyay24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chattopadhyay24a/chattopadhyay24a.pdf
https://openreview.net/forum?id=yTXv8KDD1P
Performance Bounds for Active Binary Testing with Information Maximization
https://proceedings.mlr.press/v235/chattopadhyay24a.html
Aditya Chattopadhyay, Benjamin David Haeffele, Rene Vidal, Donald Geman
https://proceedings.mlr.press/v235/chattopadhyay24a.html
ICML 2024
In many applications like experimental design, group testing, and medical diagnosis, the state of a random variable $Y$ is revealed by successively observing the outcomes of binary tests about $Y$. New tests are selected adaptively based on the history of outcomes observed so far. If the number of states of $Y$ is fini...
https://proceedings.mlr.press/v235/che24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/che24a/che24a.pdf
https://openreview.net/forum?id=R6GT1UDcOW
Target Networks and Over-parameterization Stabilize Off-policy Bootstrapping with Function Approximation
https://proceedings.mlr.press/v235/che24a.html
Fengdi Che, Chenjun Xiao, Jincheng Mei, Bo Dai, Ramki Gummadi, Oscar A Ramirez, Christopher K Harris, A. Rupam Mahmood, Dale Schuurmans
https://proceedings.mlr.press/v235/che24a.html
ICML 2024
We prove that the combination of a target network and over-parameterized linear function approximation establishes a weaker convergence condition for bootstrapped value estimation in certain cases, even with off-policy data. Our condition is naturally satisfied for expected updates over the entire state-action space or...
https://proceedings.mlr.press/v235/chen24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24a/chen24a.pdf
https://openreview.net/forum?id=17ZwoHl65h
PlanDQ: Hierarchical Plan Orchestration via D-Conductor and Q-Performer
https://proceedings.mlr.press/v235/chen24a.html
Chang Chen, Junyeob Baek, Fei Deng, Kenji Kawaguchi, Caglar Gulcehre, Sungjin Ahn
https://proceedings.mlr.press/v235/chen24a.html
ICML 2024
Despite the recent advancements in offline RL, no unified algorithm could achieve superior performance across a broad range of tasks. Offline value function learning, in particular, struggles with sparse-reward, long-horizon tasks due to the difficulty of solving credit assignment and extrapolation errors that accumula...
https://proceedings.mlr.press/v235/chen24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24b/chen24b.pdf
https://openreview.net/forum?id=F3G2udCF3Q
How Interpretable Are Interpretable Graph Neural Networks?
https://proceedings.mlr.press/v235/chen24b.html
Yongqiang Chen, Yatao Bian, Bo Han, James Cheng
https://proceedings.mlr.press/v235/chen24b.html
ICML 2024
Interpretable graph neural networks (XGNNs) are widely adopted in various scientific applications involving graph-structured data. Existing XGNNs predominantly adopt the attention-based mechanism to learn edge or node importance for extracting and making predictions with the interpretable subgraph. However, the repres...
https://proceedings.mlr.press/v235/chen24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24c/chen24c.pdf
https://openreview.net/forum?id=5lI9wm4dws
Doubly Robust Causal Effect Estimation under Networked Interference via Targeted Learning
https://proceedings.mlr.press/v235/chen24c.html
Weilin Chen, Ruichu Cai, Zeqin Yang, Jie Qiao, Yuguang Yan, Zijian Li, Zhifeng Hao
https://proceedings.mlr.press/v235/chen24c.html
ICML 2024
Causal effect estimation under networked interference is an important but challenging problem. Available parametric methods are limited in their model space, while previous semiparametric methods, e.g., leveraging neural networks to fit only one single nuisance function, may still encounter misspecification problems un...
https://proceedings.mlr.press/v235/chen24d.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24d/chen24d.pdf
https://openreview.net/forum?id=J6prHJsIlf
Feature Attribution with Necessity and Sufficiency via Dual-stage Perturbation Test for Causal Explanation
https://proceedings.mlr.press/v235/chen24d.html
Xuexin Chen, Ruichu Cai, Zhengting Huang, Yuxuan Zhu, Julien Horwood, Zhifeng Hao, Zijian Li, José Miguel Hernández-Lobato
https://proceedings.mlr.press/v235/chen24d.html
ICML 2024
We investigate the problem of explainability for machine learning models, focusing on Feature Attribution Methods (FAMs) that evaluate feature importance through perturbation tests. Despite their utility, FAMs struggle to distinguish the contributions of different features, when their prediction changes are similar aft...
https://proceedings.mlr.press/v235/chen24e.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24e/chen24e.pdf
https://openreview.net/forum?id=rADFNrIss3
InstructZero: Efficient Instruction Optimization for Black-Box Large Language Models
https://proceedings.mlr.press/v235/chen24e.html
Lichang Chen, Jiuhai Chen, Tom Goldstein, Heng Huang, Tianyi Zhou
https://proceedings.mlr.press/v235/chen24e.html
ICML 2024
Large language models (LLMs) are instruction followers but the performance varies under different instructions. It is challenging to create the best instruction, especially for black-box LLMs on which backpropagation is forbidden. Instead of directly optimizing the discrete instruction, we optimize a low-dimensional so...
https://proceedings.mlr.press/v235/chen24f.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24f/chen24f.pdf
https://openreview.net/forum?id=61RlaY9EIn
MaSS: Multi-attribute Selective Suppression for Utility-preserving Data Transformation from an Information-theoretic Perspective
https://proceedings.mlr.press/v235/chen24f.html
Yizhuo Chen, Chun-Fu Chen, Hsiang Hsu, Shaohan Hu, Marco Pistoia, Tarek F. Abdelzaher
https://proceedings.mlr.press/v235/chen24f.html
ICML 2024
The growing richness of large-scale datasets has been crucial in driving the rapid advancement and wide adoption of machine learning technologies. The massive collection and usage of data, however, pose an increasing risk for people’s private and sensitive information due to either inadvertent mishandling or malicious ...
https://proceedings.mlr.press/v235/chen24g.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24g/chen24g.pdf
https://openreview.net/forum?id=g9mYBdooPA
Policy-conditioned Environment Models are More Generalizable
https://proceedings.mlr.press/v235/chen24g.html
Ruifeng Chen, Xiong-Hui Chen, Yihao Sun, Siyuan Xiao, Minhui Li, Yang Yu
https://proceedings.mlr.press/v235/chen24g.html
ICML 2024
In reinforcement learning, it is crucial to have an accurate environment dynamics model to evaluate different policies’ value in downstream tasks like offline policy optimization and policy evaluation. However, the learned model is known to be inaccurate in predictions when evaluating target policies different from dat...
https://proceedings.mlr.press/v235/chen24h.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24h/chen24h.pdf
https://openreview.net/forum?id=dbFEFHAD79
MLLM-as-a-Judge: Assessing Multimodal LLM-as-a-Judge with Vision-Language Benchmark
https://proceedings.mlr.press/v235/chen24h.html
Dongping Chen, Ruoxi Chen, Shilin Zhang, Yaochen Wang, Yinuo Liu, Huichi Zhou, Qihui Zhang, Yao Wan, Pan Zhou, Lichao Sun
https://proceedings.mlr.press/v235/chen24h.html
ICML 2024
Multimodal Large Language Models (MLLMs) have gained significant attention recently, showing remarkable potential in artificial general intelligence. However, assessing the utility of MLLMs presents considerable challenges, primarily due to the absence of multimodal benchmarks that align with human preferences. Drawing in...
https://proceedings.mlr.press/v235/chen24i.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24i/chen24i.pdf
https://openreview.net/forum?id=4zAHgkiCQg
Premise Order Matters in Reasoning with Large Language Models
https://proceedings.mlr.press/v235/chen24i.html
Xinyun Chen, Ryan Andrew Chi, Xuezhi Wang, Denny Zhou
https://proceedings.mlr.press/v235/chen24i.html
ICML 2024
Large language models (LLMs) have accomplished remarkable reasoning performance in various domains. However, in the domain of reasoning tasks, we discover a frailty: LLMs are surprisingly brittle to the ordering of the premises, despite the fact that such ordering does not alter the underlying task. In particular, we o...
https://proceedings.mlr.press/v235/chen24j.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24j/chen24j.pdf
https://openreview.net/forum?id=O4cHTxW9BS
Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models
https://proceedings.mlr.press/v235/chen24j.html
Zixiang Chen, Yihe Deng, Huizhuo Yuan, Kaixuan Ji, Quanquan Gu
https://proceedings.mlr.press/v235/chen24j.html
ICML 2024
Harnessing the power of human-annotated data through Supervised Fine-Tuning (SFT) is pivotal for advancing Large Language Models (LLMs). In this paper, we delve into the prospect of growing a strong LLM out of a weak one without the need for acquiring additional human-annotated data. We propose a new fine-tuning method...
https://proceedings.mlr.press/v235/chen24k.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24k/chen24k.pdf
https://openreview.net/forum?id=xaSpuvNYwS
Robust Classification via a Single Diffusion Model
https://proceedings.mlr.press/v235/chen24k.html
Huanran Chen, Yinpeng Dong, Zhengyi Wang, Xiao Yang, Chengqi Duan, Hang Su, Jun Zhu
https://proceedings.mlr.press/v235/chen24k.html
ICML 2024
Diffusion models have been applied to improve adversarial robustness of image classifiers by purifying the adversarial noises or generating realistic data for adversarial training. However, diffusion-based purification can be evaded by stronger adaptive attacks while adversarial training does not perform well under uns...
https://proceedings.mlr.press/v235/chen24l.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24l/chen24l.pdf
https://openreview.net/forum?id=puSMYmHmJW
Relational Learning in Pre-Trained Models: A Theory from Hypergraph Recovery Perspective
https://proceedings.mlr.press/v235/chen24l.html
Yang Chen, Cong Fang, Zhouchen Lin, Bing Liu
https://proceedings.mlr.press/v235/chen24l.html
ICML 2024
Foundation Models (FMs) have demonstrated remarkable insights into the relational dynamics of the world, leading to the crucial question: how do these models acquire an understanding of world hybrid relations? Traditional statistical learning, particularly for prediction problems, may overlook the rich and inherently s...
https://proceedings.mlr.press/v235/chen24m.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24m/chen24m.pdf
https://openreview.net/forum?id=JVhUR8q27o
Towards AutoAI: Optimizing a Machine Learning System with Black-box and Differentiable Components
https://proceedings.mlr.press/v235/chen24m.html
Zhiliang Chen, Chuan-Sheng Foo, Bryan Kian Hsiang Low
https://proceedings.mlr.press/v235/chen24m.html
ICML 2024
Machine learning (ML) models in the real world typically do not exist in isolation. They are usually part of a complex system (e.g., healthcare systems, self-driving cars) containing multiple ML and black-box components. The problem of optimizing such systems, which we refer to as automated AI (AutoAI), requires us to ...
https://proceedings.mlr.press/v235/chen24n.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24n/chen24n.pdf
https://openreview.net/forum?id=UQYXZdca92
Probabilistic Forecasting with Stochastic Interpolants and Föllmer Processes
https://proceedings.mlr.press/v235/chen24n.html
Yifan Chen, Mark Goldstein, Mengjian Hua, Michael Samuel Albergo, Nicholas Matthew Boffi, Eric Vanden-Eijnden
https://proceedings.mlr.press/v235/chen24n.html
ICML 2024
We propose a framework for probabilistic forecasting of dynamical systems based on generative modeling. Given observations of the system state over time, we formulate the forecasting problem as sampling from the conditional distribution of the future system state given its current state. To this end, we leverage the fr...
https://proceedings.mlr.press/v235/chen24o.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24o/chen24o.pdf
https://openreview.net/forum?id=iLSgF7jMtI
CogDPM: Diffusion Probabilistic Models via Cognitive Predictive Coding
https://proceedings.mlr.press/v235/chen24o.html
Kaiyuan Chen, Xingzhuo Guo, Yu Zhang, Jianmin Wang, Mingsheng Long
https://proceedings.mlr.press/v235/chen24o.html
ICML 2024
Predictive Coding (PC) is a theoretical framework in cognitive science suggesting that the human brain processes cognition through spatiotemporal prediction of the visual world. Existing studies have developed spatiotemporal prediction neural networks based on the PC theory, emulating its two core mechanisms: Correcting pr...
https://proceedings.mlr.press/v235/chen24p.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24p/chen24p.pdf
https://openreview.net/forum?id=qIiPM5CbRY
On Interpolating Experts and Multi-Armed Bandits
https://proceedings.mlr.press/v235/chen24p.html
Houshuang Chen, Yuchen He, Chihao Zhang
https://proceedings.mlr.press/v235/chen24p.html
ICML 2024
Learning with expert advice and multi-armed bandit are two classic online decision problems which differ on how the information is observed in each round of the game. We study a family of problems interpolating the two. For a vector $\mathbf{m}=(m_1,…,m_K)\in \mathbb N^K$, an instance of $\mathbf m$-MAB indicates that ...
https://proceedings.mlr.press/v235/chen24q.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24q/chen24q.pdf
https://openreview.net/forum?id=IoUOhnCmlX
Bagged Deep Image Prior for Recovering Images in the Presence of Speckle Noise
https://proceedings.mlr.press/v235/chen24q.html
Xi Chen, Zhewen Hou, Christopher Metzler, Arian Maleki, Shirin Jalali
https://proceedings.mlr.press/v235/chen24q.html
ICML 2024
We investigate both the theoretical and algorithmic aspects of likelihood-based methods for recovering a complex-valued signal from multiple sets of measurements, referred to as looks, affected by speckle (multiplicative) noise. Our theoretical contributions include establishing the first existing theoretical upper bou...
https://proceedings.mlr.press/v235/chen24r.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24r/chen24r.pdf
https://openreview.net/forum?id=LVF4P1NNwO
Exact Conversion of In-Context Learning to Model Weights in Linearized-Attention Transformers
https://proceedings.mlr.press/v235/chen24r.html
Brian K Chen, Tianyang Hu, Hui Jin, Hwee Kuan Lee, Kenji Kawaguchi
https://proceedings.mlr.press/v235/chen24r.html
ICML 2024
In-Context Learning (ICL) has been a powerful emergent property of large language models that has attracted increasing attention in recent years. In contrast to regular gradient-based learning, ICL is highly interpretable and does not require parameter updates. In this paper, we show that, for linearized transformer ne...
https://proceedings.mlr.press/v235/chen24s.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24s/chen24s.pdf
https://openreview.net/forum?id=J16WEPdqhJ
Accelerated Policy Gradient for s-rectangular Robust MDPs with Large State Spaces
https://proceedings.mlr.press/v235/chen24s.html
Ziyi Chen, Heng Huang
https://proceedings.mlr.press/v235/chen24s.html
ICML 2024
Robust Markov decision process (robust MDP) is an important machine learning framework to make a reliable policy that is robust to environmental perturbation. Despite empirical success and popularity of policy gradient methods, existing policy gradient methods require at least iteration complexity $\mathcal{O}(\epsilon...
https://proceedings.mlr.press/v235/chen24t.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24t/chen24t.pdf
https://openreview.net/forum?id=aeXRBnLoPP
Accelerated Policy Gradient: On the Convergence Rates of the Nesterov Momentum for Reinforcement Learning
https://proceedings.mlr.press/v235/chen24t.html
Yen-Ju Chen, Nai-Chieh Huang, Ching-Pei Lee, Ping-Chun Hsieh
https://proceedings.mlr.press/v235/chen24t.html
ICML 2024
Various acceleration approaches for Policy Gradient (PG) have been analyzed within the realm of Reinforcement Learning (RL). However, the theoretical understanding of the widely used momentum-based acceleration method on PG remains largely open. In response to this gap, we adapt the celebrated Nesterov’s accelerated gr...
https://proceedings.mlr.press/v235/chen24u.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24u/chen24u.pdf
https://openreview.net/forum?id=d2vONO90Rw
From Yes-Men to Truth-Tellers: Addressing Sycophancy in Large Language Models with Pinpoint Tuning
https://proceedings.mlr.press/v235/chen24u.html
Wei Chen, Zhen Huang, Liang Xie, Binbin Lin, Houqiang Li, Le Lu, Xinmei Tian, Deng Cai, Yonggang Zhang, Wenxiao Wang, Xu Shen, Jieping Ye
https://proceedings.mlr.press/v235/chen24u.html
ICML 2024
Large Language Models (LLMs) tend to prioritize adherence to user prompts over providing veracious responses, leading to the sycophancy issue. When challenged by users, LLMs tend to admit mistakes and provide inaccurate responses even if they initially provided the correct answer. Recent works propose to employ supervi...
https://proceedings.mlr.press/v235/chen24v.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24v/chen24v.pdf
https://openreview.net/forum?id=x1G7ieRgRd
Improved Communication-Privacy Trade-offs in $L_2$ Mean Estimation under Streaming Differential Privacy
https://proceedings.mlr.press/v235/chen24v.html
Wei-Ning Chen, Berivan Isik, Peter Kairouz, Albert No, Sewoong Oh, Zheng Xu
https://proceedings.mlr.press/v235/chen24v.html
ICML 2024
We study $L_2$ mean estimation under central differential privacy and communication constraints, and address two key challenges: firstly, existing mean estimation schemes that simultaneously handle both constraints are usually optimized for $L_\infty$ geometry and rely on random rotation or Kashin’s representation to a...
https://proceedings.mlr.press/v235/chen24w.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24w/chen24w.pdf
https://openreview.net/forum?id=dqpg8jdA2w
Offline Transition Modeling via Contrastive Energy Learning
https://proceedings.mlr.press/v235/chen24w.html
Ruifeng Chen, Chengxing Jia, Zefang Huang, Tian-Shuo Liu, Xu-Hui Liu, Yang Yu
https://proceedings.mlr.press/v235/chen24w.html
ICML 2024
Learning a high-quality transition model is of great importance for sequential decision-making tasks, especially in offline settings. Nevertheless, the complex behaviors of transition dynamics in real-world environments pose challenges for the standard forward models because of their inductive bias towards smooth regre...
https://proceedings.mlr.press/v235/chen24x.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24x/chen24x.pdf
https://openreview.net/forum?id=a2uFstsHPb
Efficient Pareto Manifold Learning with Low-Rank Structure
https://proceedings.mlr.press/v235/chen24x.html
Weiyu Chen, James Kwok
https://proceedings.mlr.press/v235/chen24x.html
ICML 2024
Multi-task learning, which optimizes performance across multiple tasks, is inherently a multi-objective optimization problem. Various algorithms are developed to provide discrete trade-off solutions on the Pareto front. Recently, continuous Pareto front approximations using a linear combination of base networks have em...
https://proceedings.mlr.press/v235/chen24y.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24y/chen24y.pdf
https://openreview.net/forum?id=aoAPOOtN9E
Toward Adaptive Reasoning in Large Language Models with Thought Rollback
https://proceedings.mlr.press/v235/chen24y.html
Sijia Chen, Baochun Li
https://proceedings.mlr.press/v235/chen24y.html
ICML 2024
Large language models (LLMs) have been routinely used to solve various tasks using step-by-step reasoning. However, the structure of intermediate reasoning steps, or thoughts, is rigid and unidirectional, such as chains, trees, or directed acyclic graphs. Consequently, the resulting inflexible and forward-only reasonin...
https://proceedings.mlr.press/v235/chen24z.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24z/chen24z.pdf
https://openreview.net/forum?id=JU3xHh1vWw
Identifiability Matters: Revealing the Hidden Recoverable Condition in Unbiased Learning to Rank
https://proceedings.mlr.press/v235/chen24z.html
Mouxiang Chen, Chenghao Liu, Zemin Liu, Zhuo Li, Jianling Sun
https://proceedings.mlr.press/v235/chen24z.html
ICML 2024
Unbiased Learning to Rank (ULTR) aims to train unbiased ranking models from biased click logs, by explicitly modeling a generation process for user behavior and fitting click data based on examination hypothesis. Previous research found empirically that the true latent relevance is mostly recoverable through click fitt...
https://proceedings.mlr.press/v235/chen24aa.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24aa/chen24aa.pdf
https://openreview.net/forum?id=bBzlapzeR1
High-Dimensional Kernel Methods under Covariate Shift: Data-Dependent Implicit Regularization
https://proceedings.mlr.press/v235/chen24aa.html
Yihang Chen, Fanghui Liu, Taiji Suzuki, Volkan Cevher
https://proceedings.mlr.press/v235/chen24aa.html
ICML 2024
This paper studies kernel ridge regression in high dimensions under covariate shifts and analyzes the role of importance re-weighting. We first derive the asymptotic expansion of high dimensional kernels under covariate shifts. By a bias-variance decomposition, we theoretically demonstrate that the re-weighting strateg...
https://proceedings.mlr.press/v235/chen24ab.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24ab/chen24ab.pdf
https://openreview.net/forum?id=0uUHfhXdnH
DiJiang: Efficient Large Language Models through Compact Kernelization
https://proceedings.mlr.press/v235/chen24ab.html
Hanting Chen, Liu Zhicheng, Xutao Wang, Yuchuan Tian, Yunhe Wang
https://proceedings.mlr.press/v235/chen24ab.html
ICML 2024
In an effort to reduce the computational load of Transformers, research on linear attention has gained significant momentum. However, the improvement strategies for attention mechanisms typically necessitate extensive retraining, which is impractical for large language models with a vast array of parameters. In this pa...
https://proceedings.mlr.press/v235/chen24ac.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24ac/chen24ac.pdf
https://openreview.net/forum?id=Y5Zi59N265
GeoMFormer: A General Architecture for Geometric Molecular Representation Learning
https://proceedings.mlr.press/v235/chen24ac.html
Tianlang Chen, Shengjie Luo, Di He, Shuxin Zheng, Tie-Yan Liu, Liwei Wang
https://proceedings.mlr.press/v235/chen24ac.html
ICML 2024
Molecular modeling, a central topic in quantum mechanics, aims to accurately calculate the properties and simulate the behaviors of molecular systems. The molecular model is governed by physical laws, which impose geometric constraints such as invariance and equivariance to coordinate rotation and translation. While nu...
https://proceedings.mlr.press/v235/chen24ad.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24ad/chen24ad.pdf
https://openreview.net/forum?id=v1I4zRAjMb
TENG: Time-Evolving Natural Gradient for Solving PDEs With Deep Neural Nets Toward Machine Precision
https://proceedings.mlr.press/v235/chen24ad.html
Zhuo Chen, Jacob Mccarran, Esteban Vizcaino, Marin Soljacic, Di Luo
https://proceedings.mlr.press/v235/chen24ad.html
ICML 2024
Partial differential equations (PDEs) are instrumental for modeling dynamical systems in science and engineering. The advent of neural networks has initiated a significant shift in tackling these complexities though challenges in accuracy persist, especially for initial value problems. In this paper, we introduce the T...
https://proceedings.mlr.press/v235/chen24ae.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24ae/chen24ae.pdf
https://openreview.net/forum?id=xFk0w9zoV3
EE-LLM: Large-Scale Training and Inference of Early-Exit Large Language Models with 3D Parallelism
https://proceedings.mlr.press/v235/chen24ae.html
Yanxi Chen, Xuchen Pan, Yaliang Li, Bolin Ding, Jingren Zhou
https://proceedings.mlr.press/v235/chen24ae.html
ICML 2024
We present EE-LLM, a framework for large-scale training and inference of early-exit large language models (LLMs). While recent works have shown preliminary evidence for the efficacy of early exiting in accelerating LLM inference, EE-LLM makes a foundational step towards scaling up early-exit LLMs by supporting their tr...
https://proceedings.mlr.press/v235/chen24af.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24af/chen24af.pdf
https://openreview.net/forum?id=AxmefV2NEf
TimeMIL: Advancing Multivariate Time Series Classification via a Time-aware Multiple Instance Learning
https://proceedings.mlr.press/v235/chen24af.html
Xiwen Chen, Peijie Qiu, Wenhui Zhu, Huayu Li, Hao Wang, Aristeidis Sotiras, Yalin Wang, Abolfazl Razi
https://proceedings.mlr.press/v235/chen24af.html
ICML 2024
Deep neural networks, including transformers and convolutional neural networks (CNNs), have significantly improved multivariate time series classification (MTSC). However, these methods often rely on supervised learning, which does not fully account for the sparsity and locality of patterns in time series data (e.g., q...
https://proceedings.mlr.press/v235/chen24ag.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24ag/chen24ag.pdf
https://openreview.net/forum?id=PHUAG63Efe
AegisFL: Efficient and Flexible Privacy-Preserving Byzantine-Robust Cross-silo Federated Learning
https://proceedings.mlr.press/v235/chen24ag.html
Dong Chen, Hongyuan Qu, Guangwu Xu
https://proceedings.mlr.press/v235/chen24ag.html
ICML 2024
Privacy attacks and poisoning attacks are two of the thorniest problems in federated learning (FL). Homomorphic encryption (HE), which allows certain mathematical operations to be done in the ciphertext state, provides a way to solve these two problems simultaneously. However, existing Paillier-based and CKKS-based pr...
https://proceedings.mlr.press/v235/chen24ah.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24ah/chen24ah.pdf
https://openreview.net/forum?id=ffLblkoCw8
MAGDi: Structured Distillation of Multi-Agent Interaction Graphs Improves Reasoning in Smaller Language Models
https://proceedings.mlr.press/v235/chen24ah.html
Justin Chen, Swarnadeep Saha, Elias Stengel-Eskin, Mohit Bansal
https://proceedings.mlr.press/v235/chen24ah.html
ICML 2024
Multi-agent interactions between Large Language Model (LLM) agents have shown major improvements on diverse reasoning tasks. However, these involve long generations from multiple models across several rounds, making them expensive. Moreover, these multi-agent approaches fail to provide a final, single model for efficie...
https://proceedings.mlr.press/v235/chen24ai.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24ai/chen24ai.pdf
https://openreview.net/forum?id=sLZzFTMWSt
CaRiNG: Learning Temporal Causal Representation under Non-Invertible Generation Process
https://proceedings.mlr.press/v235/chen24ai.html
Guangyi Chen, Yifan Shen, Zhenhao Chen, Xiangchen Song, Yuewen Sun, Weiran Yao, Xiao Liu, Kun Zhang
https://proceedings.mlr.press/v235/chen24ai.html
ICML 2024
Identifying the underlying time-delayed latent causal processes in sequential data is vital for grasping temporal dynamics and making downstream reasoning. While some recent methods can robustly identify these latent causal variables, they rely on strict assumptions about the invertible generation process from latent v...
https://proceedings.mlr.press/v235/chen24aj.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24aj/chen24aj.pdf
https://openreview.net/forum?id=d2f2sCXQuI
GRATH: Gradual Self-Truthifying for Large Language Models
https://proceedings.mlr.press/v235/chen24aj.html
Weixin Chen, Dawn Song, Bo Li
https://proceedings.mlr.press/v235/chen24aj.html
ICML 2024
Truthfulness is paramount for large language models (LLMs) as they are increasingly deployed in real-world applications. However, existing LLMs still struggle with generating truthful content, as evidenced by their modest performance on benchmarks like TruthfulQA. To address this issue, we propose GRAdual self-truTHify...
https://proceedings.mlr.press/v235/chen24ak.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24ak/chen24ak.pdf
https://openreview.net/forum?id=QhHMx51ir6
Unleashing the Power of Meta-tuning for Few-shot Generalization Through Sparse Interpolated Experts
https://proceedings.mlr.press/v235/chen24ak.html
Shengzhuang Chen, Jihoon Tack, Yunqiao Yang, Yee Whye Teh, Jonathan Richard Schwarz, Ying Wei
https://proceedings.mlr.press/v235/chen24ak.html
ICML 2024
Recent successes suggest that parameter-efficient fine-tuning of foundation models is becoming the state-of-the-art method for transfer learning in vision, gradually replacing the rich literature of alternatives such as meta-learning. In trying to harness the best of both worlds, meta-tuning introduces a subsequent opt...
https://proceedings.mlr.press/v235/chen24al.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24al/chen24al.pdf
https://openreview.net/forum?id=yHs3jIPgaF
Performative Prediction with Bandit Feedback: Learning through Reparameterization
https://proceedings.mlr.press/v235/chen24al.html
Yatong Chen, Wei Tang, Chien-Ju Ho, Yang Liu
https://proceedings.mlr.press/v235/chen24al.html
ICML 2024
Performative prediction, as introduced by Perdomo et al., is a framework for studying social prediction in which the data distribution itself changes in response to the deployment of a model. Existing work in this field usually hinges on three assumptions that are easily violated in practice: that the performative risk...
https://proceedings.mlr.press/v235/chen24am.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24am/chen24am.pdf
https://openreview.net/forum?id=4RqG4K5UwL
Self-Attention through Kernel-Eigen Pair Sparse Variational Gaussian Processes
https://proceedings.mlr.press/v235/chen24am.html
Yingyi Chen, Qinghua Tao, Francesco Tonin, Johan Suykens
https://proceedings.mlr.press/v235/chen24am.html
ICML 2024
While the great capability of Transformers significantly boosts prediction accuracy, it could also yield overconfident predictions and require calibrated uncertainty estimation, which can be commonly tackled by Gaussian processes (GPs). Existing works apply GPs with symmetric kernels under variational inference to the ...
https://proceedings.mlr.press/v235/chen24an.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24an/chen24an.pdf
https://openreview.net/forum?id=E41gvBG4s6
Recovering Labels from Local Updates in Federated Learning
https://proceedings.mlr.press/v235/chen24an.html
Huancheng Chen, Haris Vikalo
https://proceedings.mlr.press/v235/chen24an.html
ICML 2024
Gradient inversion (GI) attacks present a threat to the privacy of clients in federated learning (FL) by aiming to enable reconstruction of the clients’ data from communicated model updates. A number of such techniques attempt to accelerate data recovery by first reconstructing labels of the samples used in local trai...
https://proceedings.mlr.press/v235/chen24ao.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24ao/chen24ao.pdf
https://openreview.net/forum?id=gjgRKbdYR7
SelfIE: Self-Interpretation of Large Language Model Embeddings
https://proceedings.mlr.press/v235/chen24ao.html
Haozhe Chen, Carl Vondrick, Chengzhi Mao
https://proceedings.mlr.press/v235/chen24ao.html
ICML 2024
How do large language models (LLMs) obtain their answers? The ability to explain and control an LLM’s reasoning process is key for reliability, transparency, and future model developments. We propose SelfIE (Self-Interpretation of Embeddings), a framework that enables LLMs to interpret their own embeddings in natural l...
https://proceedings.mlr.press/v235/chen24ap.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24ap/chen24ap.pdf
https://openreview.net/forum?id=RuH78kOcDi
Locally Differentially Private Decentralized Stochastic Bilevel Optimization with Guaranteed Convergence Accuracy
https://proceedings.mlr.press/v235/chen24ap.html
Ziqin Chen, Yongqiang Wang
https://proceedings.mlr.press/v235/chen24ap.html
ICML 2024
Decentralized bilevel optimization based machine learning techniques are achieving remarkable success in a wide variety of domains. However, the intensive exchange of information (involving nested-loops of consensus or communication iterations) in existing decentralized bilevel optimization algorithms leads to a great ...
https://proceedings.mlr.press/v235/chen24aq.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24aq/chen24aq.pdf
https://openreview.net/forum?id=hQpUhySEJi
Subequivariant Reinforcement Learning in 3D Multi-Entity Physical Environments
https://proceedings.mlr.press/v235/chen24aq.html
Runfa Chen, Ling Wang, Yu Du, Tianrui Xue, Fuchun Sun, Jianwei Zhang, Wenbing Huang
https://proceedings.mlr.press/v235/chen24aq.html
ICML 2024
Learning policies for multi-entity systems in 3D environments is far more complicated than in single-entity scenarios, due to the exponential expansion of the global state space as the number of entities increases. One potential solution for alleviating the exponential complexity is dividing the global space into indepe...
https://proceedings.mlr.press/v235/chen24ar.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24ar/chen24ar.pdf
https://openreview.net/forum?id=7sgqXa4aNM
A General Framework for Learning from Weak Supervision
https://proceedings.mlr.press/v235/chen24ar.html
Hao Chen, Jindong Wang, Lei Feng, Xiang Li, Yidong Wang, Xing Xie, Masashi Sugiyama, Rita Singh, Bhiksha Raj
https://proceedings.mlr.press/v235/chen24ar.html
ICML 2024
Weakly supervised learning generally faces challenges in applicability to various scenarios with diverse weak supervision and in scalability due to the complexity of existing algorithms, thereby hindering practical deployment. This paper introduces a general framework for learning from weak supervision (GLWS) with ...
https://proceedings.mlr.press/v235/chen24as.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24as/chen24as.pdf
https://openreview.net/forum?id=OnidGtOhg3
Diffusion Model-Augmented Behavioral Cloning
https://proceedings.mlr.press/v235/chen24as.html
Shang-Fu Chen, Hsiang-Chun Wang, Ming-Hao Hsu, Chun-Mao Lai, Shao-Hua Sun
https://proceedings.mlr.press/v235/chen24as.html
ICML 2024
Imitation learning addresses the challenge of learning by observing an expert’s demonstrations without access to reward signals from environments. Most existing imitation learning methods that do not require interacting with environments either model the expert distribution as the conditional probability p(a|s) (e.g., ...
https://proceedings.mlr.press/v235/chen24at.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24at/chen24at.pdf
https://openreview.net/forum?id=Rp8R9C0Sth
AutoOS: Make Your OS More Powerful by Exploiting Large Language Models
https://proceedings.mlr.press/v235/chen24at.html
Huilai Chen, Yuanbo Wen, Limin Cheng, Shouxu Kuang, Yumeng Liu, Weijia Li, Ling Li, Rui Zhang, Xinkai Song, Wei Li, Qi Guo, Yunji Chen
https://proceedings.mlr.press/v235/chen24at.html
ICML 2024
With the rapid development of Artificial Intelligence of Things (AIoT), customizing and optimizing operating system (OS) kernel configurations for various AIoT application scenarios is crucial for maximizing system performance. However, existing approaches falter due to the overwhelming problem complexity (i.e., over 1...
https://proceedings.mlr.press/v235/chen24au.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24au/chen24au.pdf
https://openreview.net/forum?id=VOcsmIBiXE
Positional Knowledge is All You Need: Position-induced Transformer (PiT) for Operator Learning
https://proceedings.mlr.press/v235/chen24au.html
Junfeng Chen, Kailiang Wu
https://proceedings.mlr.press/v235/chen24au.html
ICML 2024
Operator learning for Partial Differential Equations (PDEs) is rapidly emerging as a promising approach for surrogate modeling of intricate systems. Transformers with the self-attention mechanism—a powerful tool originally designed for natural language processing—have recently been adapted for operator learning. Howeve...
https://proceedings.mlr.press/v235/chen24av.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24av/chen24av.pdf
https://openreview.net/forum?id=s3e8poX3kb
In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation
https://proceedings.mlr.press/v235/chen24av.html
Shiqi Chen, Miao Xiong, Junteng Liu, Zhengxuan Wu, Teng Xiao, Siyang Gao, Junxian He
https://proceedings.mlr.press/v235/chen24av.html
ICML 2024
Large language models (LLMs) frequently hallucinate, e.g., making factual errors, yet our understanding of why they make these errors remains limited. In this study, we aim to understand the underlying mechanisms of LLM hallucinations from the perspective of inner representations. We discover a pattern associated with ...
https://proceedings.mlr.press/v235/chen24aw.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24aw/chen24aw.pdf
https://openreview.net/forum?id=pQyoBWA146
Split-Ensemble: Efficient OOD-aware Ensemble via Task and Model Splitting
https://proceedings.mlr.press/v235/chen24aw.html
Anthony Chen, Huanrui Yang, Yulu Gan, Denis A Gudovskiy, Zhen Dong, Haofan Wang, Tomoyuki Okuno, Yohei Nakata, Kurt Keutzer, Shanghang Zhang
https://proceedings.mlr.press/v235/chen24aw.html
ICML 2024
Uncertainty estimation is crucial for deep learning models to detect out-of-distribution (OOD) inputs. However, naive deep learning classifiers produce uncalibrated uncertainty for OOD data. Improving the uncertainty estimation typically requires external data for OOD-aware training or considerable costs to build a...
https://proceedings.mlr.press/v235/chen24ax.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24ax/chen24ax.pdf
https://openreview.net/forum?id=DJdVzxemdA
Deep Demonstration Tracing: Learning Generalizable Imitator Policy for Runtime Imitation from a Single Demonstration
https://proceedings.mlr.press/v235/chen24ax.html
Xiong-Hui Chen, Junyin Ye, Hang Zhao, Yi-Chen Li, Xu-Hui Liu, Haoran Shi, Yu-Yan Xu, Zhihao Ye, Si-Hang Yang, Yang Yu, Anqi Huang, Kai Xu, Zongzhang Zhang
https://proceedings.mlr.press/v235/chen24ax.html
ICML 2024
One-shot imitation learning (OSIL) aims to learn an imitator agent that can execute multiple tasks with only a single demonstration. In real-world scenarios, the environment is dynamic, e.g., unexpected changes can occur after demonstration. Thus, achieving generalization of the imitator agent is crucial as agents would i...
https://proceedings.mlr.press/v235/chen24ay.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24ay/chen24ay.pdf
https://openreview.net/forum?id=oRLwyayrh1
DRCT: Diffusion Reconstruction Contrastive Training towards Universal Detection of Diffusion Generated Images
https://proceedings.mlr.press/v235/chen24ay.html
Baoying Chen, Jishen Zeng, Jianquan Yang, Rui Yang
https://proceedings.mlr.press/v235/chen24ay.html
ICML 2024
Diffusion models have made significant strides in visual content generation but have also raised increasing demands on generated image detection. Existing detection methods have achieved considerable progress, but they usually suffer a significant decline in accuracy when detecting images generated by an unseen diffusion mo...
https://proceedings.mlr.press/v235/chen24az.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24az/chen24az.pdf
https://openreview.net/forum?id=yShA4VPYZB
$\rm E(3)$-Equivariant Actor-Critic Methods for Cooperative Multi-Agent Reinforcement Learning
https://proceedings.mlr.press/v235/chen24az.html
Dingyang Chen, Qi Zhang
https://proceedings.mlr.press/v235/chen24az.html
ICML 2024
Identification and analysis of symmetrical patterns in the natural world have led to significant discoveries across various scientific fields, such as the formulation of gravitational laws in physics and advancements in the study of chemical structures. In this paper, we focus on exploiting Euclidean symmetries inheren...
https://proceedings.mlr.press/v235/chen24ba.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24ba/chen24ba.pdf
https://openreview.net/forum?id=jrHUbftLd6
FedMBridge: Bridgeable Multimodal Federated Learning
https://proceedings.mlr.press/v235/chen24ba.html
Jiayi Chen, Aidong Zhang
https://proceedings.mlr.press/v235/chen24ba.html
ICML 2024
Multimodal Federated Learning (MFL) addresses the setup of multiple clients with diversified modality types (e.g. image, text, video, and audio) working together to improve their local personal models in a privacy-preserving manner. Prior MFL works rely on restrictive compositional neural architecture designs to ensure inter...
https://proceedings.mlr.press/v235/chen24bb.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24bb/chen24bb.pdf
https://openreview.net/forum?id=rkYOxLLv2x
Revealing the Dark Secrets of Extremely Large Kernel ConvNets on Robustness
https://proceedings.mlr.press/v235/chen24bb.html
Honghao Chen, Yurong Zhang, Xiaokun Feng, Xiangxiang Chu, Kaiqi Huang
https://proceedings.mlr.press/v235/chen24bb.html
ICML 2024
Robustness is a vital aspect to consider when deploying deep learning models into the wild. Numerous studies have been dedicated to the study of the robustness of vision transformers (ViTs), which have dominated as the mainstream backbone choice for vision tasks since the dawn of the 2020s. Recently, some large kernel conv...