| paper_id uint32 (0–3.26k) | title string (len 15–150) | paper_url string (len 42) | authors list (1–21) | type string (3 classes) | abstract string (len 393–2.58k) | keywords string (len 5–409) | TL;DR string (len 7–250) ⌀ | submission_number int64 (1–16.4k) | arxiv_id string (len 10) ⌀ | embedding list (768) | github string (len 26–123) ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|
0 | Implicit Regularization for Tubal Tensor Factorizations via Gradient Descent | https://openreview.net/forum?id=2GmXJnyNM4 | [
"Santhosh Karnik",
"Anna Veselovska",
"Mark Iwen",
"Felix Krahmer"
] | Oral | We provide a rigorous analysis of implicit regularization in an overparametrized tensor factorization problem beyond the lazy training regime. For matrix factorization problems, this phenomenon has been studied in a number of works. A particular challenge has been to design universal initialization strategies which provably lead to implicit regularization in gradient-descent methods. At the same time, it has been argued by Cohen et al. (2016) that more general classes of neural networks can be captured by considering tensor factorizations. However, in the tensor case, implicit regularization has only been rigorously established for gradient flow or in the lazy training regime. In this paper, we prove the first tensor result of its kind for gradient descent rather than gradient flow. We focus on the tubal tensor product and the associated notion of low tubal rank, encouraged by the relevance of this model for image data. We establish that gradient descent in an overparametrized tensor factorization model with a small random initialization exhibits an implicit bias towards solutions of low tubal rank. Our theoretical findings are illustrated in an extensive set of numerical simulations showcasing the dynamics predicted by our theory as well as the crucial role of using a small random initialization. | overparameterization, implicit regularization, tensor factorization | We provide a rigorous analysis of implicit regularization in an overparametrized tensor factorization problem beyond the lazy training regime. | 16,047 | 2410.16247 | [
-0.019181201234459877,
-0.038270384073257446,
0.01611342281103134,
0.024747205898165703,
0.02661350928246975,
0.048398762941360474,
0.011871577240526676,
0.00596786430105567,
-0.016043782234191895,
-0.05787587910890579,
-0.01238834485411644,
-0.0017976615345105529,
-0.05827592313289642,
0.... | https://github.com/AnnaVeselovskaUA/tubal-tensor-implicit-reg-GD |
1 | Algorithm Development in Neural Networks: Insights from the Streaming Parity Task | https://openreview.net/forum?id=3go0lhfxd0 | [
"Loek van Rossem",
"Andrew M Saxe"
] | Oral | Even when massively overparameterized, deep neural networks show a remarkable ability to generalize. Research on this phenomenon has focused on generalization within distribution, via smooth interpolation. Yet in some settings neural networks also learn to extrapolate to data far beyond the bounds of the original training set, sometimes even allowing for infinite generalization, implying that an algorithm capable of solving the task has been learned. Here we undertake a case study of the learning dynamics of recurrent neural networks trained on the streaming parity task in order to develop an effective theory of algorithm development. The streaming parity task is a simple but nonlinear task defined on sequences up to arbitrary length. We show that, with sufficient finite training experience, RNNs exhibit a phase transition to perfect infinite generalization. Using an effective theory for the representational dynamics, we find an implicit representational merger effect which can be interpreted as the construction of a finite automaton that reproduces the task. Overall, our results disclose one mechanism by which neural networks can generalize infinitely from finite training experience. | Out-of-distribution generalization, Algorithm discovery, Deep learning theory, Mechanistic Interpretability | We explain in a simple setting how out-of-distribution generalization can occur. | 16,013 | 2507.09897 | [
-0.0326198972761631,
-0.014057177118957043,
-0.01832646317780018,
0.03963460400700569,
0.026646170765161514,
0.017422083765268326,
0.04410151392221451,
0.03895360231399536,
-0.05122928321361542,
-0.02704167366027832,
-0.0036526948679238558,
-0.021394891664385796,
-0.0678320974111557,
-0.00... | null |
2 | Orthogonal Subspace Decomposition for Generalizable AI-Generated Image Detection | https://openreview.net/forum?id=GFpjO8S8Po | [
"Zhiyuan Yan",
"Jiangming Wang",
"Peng Jin",
"Ke-Yue Zhang",
"Chengchun Liu",
"Shen Chen",
"Taiping Yao",
"Shouhong Ding",
"Baoyuan Wu",
"Li Yuan"
] | Oral | Detecting AI-generated images (AIGIs), such as natural images or face images, has become increasingly important yet challenging. In this paper, we start from a new perspective to examine the reason behind the generalization failure in AIGI detection, which we term the asymmetry phenomenon: a naively trained detector tends to overfit to the limited and monotonous fake patterns, causing the feature space to become highly constrained and low-ranked, which we show severely limits expressivity and generalization. One potential remedy is incorporating the pre-trained knowledge within the vision foundation models (higher-ranked) to expand the feature space, alleviating the model's overfitting to fake patterns. To this end, we employ Singular Value Decomposition (SVD) to decompose the original feature space into two orthogonal subspaces. By freezing the principal components and adapting only the remaining components, we preserve the pre-trained knowledge while learning fake patterns. Compared to existing full-parameter and LoRA-based tuning methods, we explicitly ensure orthogonality, enabling a higher rank for the whole feature space, effectively minimizing overfitting and enhancing generalization. We finally identify a crucial insight: our method implicitly learns a vital prior that fakes are actually derived from the real, indicating a hierarchical relationship rather than independence. Modeling this prior, we believe, is essential for achieving superior generalization. Our codes are publicly available at https://github.com/YZY-stack/Effort-AIGI-Detection. | AI-Generated Image Detection, Face Forgery Detection, Deepfake Detection, Media Forensics | We introduce a novel approach via orthogonal subspace decomposition for generalizable AI-generated image detection. | 15,222 | 2411.15633 | [
-0.00003362178904353641,
-0.037818778306245804,
0.04007361829280853,
0.027026934549212456,
0.0204058475792408,
0.015415623784065247,
0.03986102715134621,
-0.005431922152638435,
-0.005959442351013422,
-0.05781417712569237,
-0.03166253864765167,
0.014723767526447773,
-0.0881330817937851,
0.0... | https://github.com/YZY-stack/Effort-AIGI-Detection |
3 | Accelerating LLM Inference with Lossless Speculative Decoding Algorithms for Heterogeneous Vocabularies | https://openreview.net/forum?id=vQubr1uBUw | [
"Nadav Timor",
"Jonathan Mamou",
"Daniel Korat",
"Moshe Berchansky",
"Gaurav Jain",
"Oren Pereg",
"Moshe Wasserblat",
"David Harel"
] | Oral | Accelerating the inference of large language models (LLMs) is a critical challenge in generative AI. Speculative decoding (SD) methods offer substantial efficiency gains by generating multiple tokens using a single target forward pass. However, existing SD approaches require the drafter and target models to share the same vocabulary, thus limiting the pool of possible drafters, often necessitating the training of a drafter from scratch. We present three new SD methods that remove this shared-vocabulary constraint. All three methods preserve the target distribution (i.e., they are lossless) and work with off-the-shelf models without requiring additional training or modifications. Empirically, on summarization, programming, and long-context tasks, our algorithms demonstrate significant speedups of up to 2.8x over standard autoregressive decoding. By enabling any off-the-shelf model to serve as a drafter and requiring no retraining, this work substantially broadens the applicability of the SD framework in practice. | Speculative Decoding, Large Language Models, Vocabulary Alignment, Heterogeneous Vocabularies, Efficient Inference, Inference Acceleration, Rejection Sampling, Tokenization, Transformer Architectures, Text Generation, Open Source. | null | 15,148 | 2502.05202 | [
0.0034581993240863085,
-0.026665760204195976,
-0.014417539350688457,
0.04200471565127373,
0.043362341821193695,
0.04652798920869827,
0.00632978230714798,
0.015310577116906643,
-0.005913200788199902,
-0.010434215888381004,
-0.010758771561086178,
0.040639977902173996,
-0.05736065283417702,
0... | https://github.com/keyboardAnt/hf-bench |
4 | LLM-SRBench: A New Benchmark for Scientific Equation Discovery with Large Language Models | https://openreview.net/forum?id=SyQPiZJVWY | [
"Parshin Shojaee",
"Ngoc-Hieu Nguyen",
"Kazem Meidani",
"Amir Barati Farimani",
"Khoa D Doan",
"Chandan K. Reddy"
] | Oral | Scientific equation discovery is a fundamental task in the history of scientific progress, enabling the derivation of laws governing natural phenomena. Recently, Large Language Models (LLMs) have gained interest for this task due to their potential to leverage embedded scientific knowledge for hypothesis generation. However, evaluating the true discovery capabilities of these methods remains challenging, as existing benchmarks often rely on common equations that are susceptible to memorization by LLMs, leading to inflated performance metrics that do not reflect actual discovery. In this paper, we introduce LLM-SRBench, a comprehensive benchmark with 239 challenging problems across four scientific domains specifically designed to evaluate LLM-based scientific equation discovery methods while preventing trivial memorization. Our benchmark comprises two main categories: LSR-Transform, which transforms common physical models into less common mathematical representations to test reasoning beyond memorization, and LSR-Synth, which introduces synthetic, discovery-driven problems requiring data-driven reasoning. Through extensive evaluation of several state-of-the-art methods on LLM-SRBench, using both open and closed LLMs, we find that the best-performing system so far achieves only 31.5% symbolic accuracy.
These findings highlight the challenges of scientific equation discovery, positioning LLM-SRBench as a valuable resource for future research. | Benchmark, Scientific Discovery, Large Language Models, Symbolic Regression | We present LLM-SRBench, the first comprehensive benchmark for evaluating scientific equation discovery with LLMs, designed to rigorously assess discovery capabilities beyond memorization | 14,812 | null | [
-0.042210932821035385,
0.004859969485551119,
0.0037580966018140316,
0.02409457601606846,
0.03766670823097229,
0.01762591302394867,
0.012549391016364098,
-0.00429905578494072,
-0.02228093333542347,
0.005307744722813368,
0.030796995386481285,
0.0336964875459671,
-0.04636078327894211,
0.00487... | https://github.com/deep-symbolic-mathematics/llm-srbench |
5 | ConceptAttention: Diffusion Transformers Learn Highly Interpretable Features | https://openreview.net/forum?id=Rc7y9HFC34 | [
"Alec Helbling",
"Tuna Han Salih Meral",
"Benjamin Hoover",
"Pinar Yanardag",
"Duen Horng Chau"
] | Oral | Do the rich representations of multi-modal diffusion transformers (DiTs) exhibit unique properties that enhance their interpretability? We introduce ConceptAttention, a novel method that leverages the expressive power of DiT attention layers to generate high-quality saliency maps that precisely locate textual concepts within images. Without requiring additional training, ConceptAttention repurposes the parameters of DiT attention layers to produce highly contextualized *concept embeddings*, contributing the major discovery that performing linear projections in the output space of DiT attention layers yields significantly sharper saliency maps compared to commonly used cross-attention maps. ConceptAttention even achieves state-of-the-art performance on zero-shot image segmentation benchmarks, outperforming 15 other zero-shot interpretability methods on the ImageNet-Segmentation dataset. ConceptAttention works for popular image models and even seamlessly generalizes to video generation. Our work contributes the first evidence that the representations of multi-modal DiTs are highly transferable to vision tasks like segmentation. | diffusion, interpretability, transformers, representation learning, mechanistic interpretability | We introduce a method for interpreting the representations of diffusion transformers by producing saliency maps of textual concepts. | 14,767 | 2502.04320 | [
0.011045873165130615,
-0.03143558278679848,
-0.0036811591126024723,
0.06845836341381073,
0.019581660628318787,
0.02596374601125717,
0.029545053839683533,
0.008918728679418564,
-0.002027298789471388,
-0.027645176276564598,
-0.04489611089229584,
-0.02071050927042961,
-0.02666446939110756,
0.... | https://github.com/helblazer811/ConceptAttention |
6 | Emergence in non-neural models: grokking modular arithmetic via average gradient outer product | https://openreview.net/forum?id=36hVB7DEB0 | [
"Neil Rohit Mallinar",
"Daniel Beaglehole",
"Libin Zhu",
"Adityanarayanan Radhakrishnan",
"Parthe Pandit",
"Mikhail Belkin"
] | Oral | Neural networks trained to solve modular arithmetic tasks exhibit grokking, a phenomenon where the test accuracy starts improving long after the model achieves 100% training accuracy in the training process. It is often taken as an example of "emergence", where model ability manifests sharply through a phase transition. In this work, we show that the phenomenon of grokking is not specific to neural networks nor to gradient descent-based optimization. Specifically, we show that this phenomenon occurs when learning modular arithmetic with Recursive Feature Machines (RFM), an iterative algorithm that uses the Average Gradient Outer Product (AGOP) to enable task-specific feature learning with general machine learning models. When used in conjunction with kernel machines, iterating RFM results in a fast transition from random, near zero, test accuracy to perfect test accuracy. This transition cannot be predicted from the training loss, which is identically zero, nor from the test loss, which remains constant in initial iterations. Instead, as we show, the transition is completely determined by feature learning: RFM gradually learns block-circulant features to solve modular arithmetic. Paralleling the results for RFM, we show that neural networks that solve modular arithmetic also learn block-circulant features. Furthermore, we present theoretical evidence that RFM uses such block-circulant features to implement the Fourier Multiplication Algorithm, which prior work posited as the generalizing solution neural networks learn on these tasks. Our results demonstrate that emergence can result purely from learning task-relevant features and is not specific to neural architectures nor gradient descent-based optimization methods. Furthermore, our work provides more evidence for AGOP as a key mechanism for feature learning in neural networks. 
| Theory of deep learning, grokking, modular arithmetic, feature learning, kernel methods, average gradient outer product (AGOP), emergence | We show that "emergence" in the task of grokking modular arithmetic occurs in feature learning kernels using the Average Gradient Outer Product (AGOP) and that the features take the form of block-circulant features. | 14,743 | 2407.20199 | [
-0.021082431077957153,
-0.012389615178108215,
0.004305555485188961,
0.022336209192872047,
0.05508352071046829,
0.025000594556331635,
0.02121897228062153,
0.004627522546797991,
-0.06073524057865143,
-0.01705952361226082,
0.010001549497246742,
0.017953528091311455,
-0.057553987950086594,
0.0... | https://github.com/nmallinar/rfm-grokking |
7 | Hierarchical Refinement: Optimal Transport to Infinity and Beyond | https://openreview.net/forum?id=EBNgREMoVD | [
"Peter Halmos",
"Julian Gold",
"Xinhao Liu",
"Benjamin Raphael"
] | Oral | Optimal transport (OT) has enjoyed great success in machine learning as a principled way to align datasets via a least-cost correspondence, driven in large part by the runtime efficiency of the Sinkhorn algorithm (Cuturi, 2013). However, Sinkhorn has quadratic space complexity in the number of points, limiting scalability to larger datasets. Low-rank OT achieves linear-space complexity, but by definition, cannot compute a one-to-one correspondence between points. When the optimal transport problem is an assignment problem between datasets, then an optimal mapping, known as the _Monge map_, is guaranteed to be a bijection. In this setting, we show that the factors of an optimal low-rank coupling co-cluster each point with its image under the Monge map. We leverage this invariant to derive an algorithm,
_Hierarchical Refinement_ (`HiRef`), that dynamically constructs a multiscale partition of each dataset using low-rank OT subproblems, culminating in a bijective coupling. Hierarchical Refinement uses linear space and has log-linear runtime, retaining the space advantage of low-rank OT while overcoming its limited resolution. We demonstrate the advantages of Hierarchical Refinement on several datasets, including ones containing over a million points, scaling full-rank OT to problems previously beyond Sinkhorn's reach. | Optimal transport, low-rank, linear complexity, sparse, full-rank | Linear-complexity optimal transport, using low-rank optimal transport to progressively refine the solution to a Monge map. | 14,649 | 2503.03025 | [
-0.027334941551089287,
-0.01004324946552515,
0.011843880638480186,
0.034024015069007874,
0.029421843588352203,
0.02161589451134205,
0.016120072454214096,
-0.018252994865179062,
-0.00789616722613573,
-0.0722673311829567,
-0.008323721587657928,
-0.020554102957248688,
-0.07072562724351883,
0.... | https://github.com/raphael-group/HiRef |
8 | Train for the Worst, Plan for the Best: Understanding Token Ordering in Masked Diffusions | https://openreview.net/forum?id=DjJmre5IkP | [
"Jaeyeon Kim",
"Kulin Shah",
"Vasilis Kontonis",
"Sham M. Kakade",
"Sitan Chen"
] | Oral | In recent years, masked diffusion models (MDMs) have emerged as a promising alternative approach for generative modeling over discrete domains. Compared to autoregressive models (ARMs), MDMs trade off complexity at training time with flexibility at inference time. At training time, they must learn to solve an exponentially large number of infilling problems, but at inference time, they can decode tokens in essentially arbitrary order. In this work we closely examine these two competing effects. On the training front, we theoretically and empirically demonstrate that MDMs indeed train on computationally intractable subproblems compared to their autoregressive counterparts. On the inference front, we show that a suitable strategy for adaptively choosing the token decoding order significantly enhances the capabilities of MDMs, allowing them to sidestep hard subproblems. On logic puzzles like Sudoku, we show that adaptive inference can boost solving accuracy in pretrained MDMs from $<7$\% to $\approx 90$\%, even outperforming ARMs that were explicitly trained via teacher forcing to learn the right order of decoding. | Discrete Diffusion models, Masked Diffusion Models, Diffusion Models, Learning Theory, Inference-Time Strategy | null | 14,095 | 2502.06768 | [
-0.03017411381006241,
-0.015793848782777786,
-0.03172555938363075,
0.06325984746217728,
0.05338553711771965,
0.03736850991845131,
0.02657005749642849,
-0.011509117670357227,
-0.01983768306672573,
-0.034276317805051804,
0.0005438215448521078,
0.0016898621106520295,
-0.03530674800276756,
0.0... | null |
9 | Statistical Test for Feature Selection Pipelines by Selective Inference | https://openreview.net/forum?id=4EYwwVuhtG | [
"Tomohiro Shiraishi",
"Tatsuya Matsukawa",
"Shuichi Nishino",
"Ichiro Takeuchi"
] | Oral | A data analysis pipeline is a structured sequence of steps that transforms raw data into meaningful insights by integrating various analysis algorithms. In this paper, we propose a novel statistical test to assess the significance of data analysis pipelines. Our approach enables the systematic development of valid statistical tests applicable to any feature selection pipeline composed of predefined components. We develop this framework based on selective inference, a statistical technique that has recently gained attention for data-driven hypotheses. As a proof of concept, we focus on feature selection pipelines for linear models, composed of three missing value imputation algorithms, three outlier detection algorithms, and three feature selection algorithms. We theoretically prove that our statistical test can control the probability of false positive feature selection at any desired level, and demonstrate its validity and effectiveness through experiments on synthetic and real data. Additionally, we present an implementation framework that facilitates testing across any configuration of these feature selection pipelines without extra implementation costs. | Data Analysis Pipeline, AutoML, Statistical Test, Selective Inference, Missing Value Imputation, Outlier Detection, Feature Selection | We introduce a statistical test for data analysis pipelines in feature selection problems, which allows for the systematic development of valid statistical tests applicable to any pipeline configuration composed of a set of predefined components. | 13,925 | 2406.18902 | [
0.009413988329470158,
-0.01155618391931057,
-0.02200639247894287,
0.030097780749201775,
0.067814402282238,
0.045050036162137985,
0.06003964692354202,
-0.025692462921142578,
-0.0019961593206971884,
-0.038121238350868225,
-0.011850292794406414,
0.0272664912045002,
-0.06766638159751892,
0.000... | https://github.com/Takeuchi-Lab-SI-Group/si4pipeline |
10 | All-Purpose Mean Estimation over R: Optimal Sub-Gaussianity with Outlier Robustness and Low Moments Performance | https://openreview.net/forum?id=qR7YsQdFxV | [
"Jasper C.H. Lee",
"Walter McKelvie",
"Maoyuan Song",
"Paul Valiant"
] | Oral | We consider the basic statistical challenge of designing an "all-purpose" mean estimation algorithm that is recommendable across a variety of settings and models.
Recent work by [Lee and Valiant 2022] introduced the first 1-d mean estimator whose error in the standard finite-variance+i.i.d. setting is optimal even in its constant factors; experimental demonstration of its good performance was shown by [Gobet et al. 2022].
Yet, unlike for classic (but not necessarily practical) estimators such as median-of-means and trimmed mean, this new algorithm lacked proven robustness guarantees in other settings, including the settings of adversarial data corruption and heavy-tailed distributions with infinite variance.
Such robustness is important for practical use cases.
This raises a research question: is it possible to have a mean estimator that is robust, *without* sacrificing provably optimal performance in the standard i.i.d. setting?
In this work, we show that Lee and Valiant's estimator is in fact an "all-purpose" mean estimator by proving:
(A) It is robust to an $\eta$-fraction of data corruption, even in the strong contamination model; it has optimal estimation error $O(\sigma\sqrt{\eta})$ for distributions with variance $\sigma^2$.
(B) For distributions with finite $z^\text{th}$ moment, for $z \in (1,2)$, it has optimal estimation error, matching the lower bounds of [Devroye et al. 2016] up to constants.
We further show (C) that outlier robustness for 1-d mean estimators in fact implies neighborhood optimality, a notion of beyond worst-case and distribution-dependent optimality recently introduced by [Dang et al. 2023].
Previously, such an optimality guarantee was only known for median-of-means, but now it holds also for all estimators that are simultaneously *robust* and *sub-Gaussian*, including Lee and Valiant's, resolving a question raised by Dang et al.
Lastly, we show (D) the asymptotic normality and efficiency of Lee and Valiant's estimator, as further evidence for its performance across many settings. | mean estimation, instance optimality, robust statistics | null | 13,863 | null | [
-0.00802331231534481,
0.014707943424582481,
0.03284085914492607,
-0.002028175862506032,
0.03645622730255127,
0.03510919585824013,
0.05296678468585014,
-0.0028722728602588177,
-0.031114203855395317,
-0.043116677552461624,
0.008942199870944023,
-0.018073150888085365,
-0.061460230499506,
-0.0... | null |
11 | SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering? | https://openreview.net/forum?id=xZXhFg43EI | [
"Samuel Miserendino",
"Michele Wang",
"Tejal Patwardhan",
"Johannes Heidecke"
] | Oral | We introduce SWE-Lancer, a benchmark of over 1400 freelance software engineering tasks from Upwork, valued at \$1 million USD total in real-world payouts. SWE-Lancer encompasses both independent engineering tasks — ranging from \$50 bug fixes to \$32,000 feature implementations — and managerial tasks, where models choose between technical implementation proposals. Independent tasks are graded with end-to-end tests triple-verified by experienced software engineers, while managerial decisions are assessed against the choices of the original hired engineering managers. We evaluate model performance and find that frontier models are still unable to solve the majority of tasks. To facilitate future research, we open-source a unified Docker image and a public evaluation split. By mapping model performance to monetary value, we hope SWE-Lancer enables greater research into the economic impact of AI model development. | software engineering, benchmark, evals, evaluations, dataset, tasks, real-world, swe, coding, delegation, agents, language models, full-stack engineering | We introduce SWE-Lancer, a benchmark of over 1500 real-world full-stack engineering tasks from Upwork, worth $1 million USD in payouts made to freelance software engineers. | 13,597 | null | [
0.0029941946268081665,
-0.03447999432682991,
-0.015928905457258224,
0.05767872557044029,
0.05825301632285118,
0.0072783213108778,
0.035821590572595596,
0.045332059264183044,
-0.017995379865169525,
-0.02635982446372509,
-0.00825938768684864,
0.016640881076455116,
-0.06718537956476212,
-0.01... | https://github.com/openai/SWELancer-Benchmark |
12 | Training a Generally Curious Agent | https://openreview.net/forum?id=UeB3Hdrhda | [
"Fahim Tajwar",
"Yiding Jiang",
"Abitha Thankaraj",
"Sumaita Sadia Rahman",
"J Zico Kolter",
"Jeff Schneider",
"Russ Salakhutdinov"
] | Oral | Efficient exploration is essential for intelligent systems interacting with their environment, but existing language models often fall short in scenarios that require strategic information gathering. In this paper, we present **Paprika**, a fine-tuning approach that enables language models to develop general decision-making capabilities that are not confined to particular environments. By training on synthetic interaction data from different tasks that require diverse strategies, Paprika teaches models to explore and adapt their behavior on a new task based on environment feedback in-context, without further gradient updates. Experimental results show that models fine-tuned with Paprika can effectively transfer their learned decision-making capabilities to entirely unseen tasks without additional training. Unlike traditional training, our approach's primary bottleneck lies in sampling useful interaction data instead of model updates. To improve sample efficiency, we propose a curriculum learning strategy that prioritizes sampling trajectories from tasks with high learning potential. These results suggest a promising path towards AI systems that can autonomously solve novel sequential decision-making problems that require interactions with the external world. | LLM Agent, Synthetic Data, Multiturn Finetuning | Method for training on synthetic data to improve LLMs' sequential decision making capabilities | 13,556 | 2502.17543 | [
-0.028104690834879875,
-0.04003501310944557,
-0.027146564796566963,
0.07006682455539703,
0.03761666640639305,
0.014136054553091526,
0.03961124271154404,
0.017447875812649727,
-0.005066996905952692,
-0.03331589326262474,
-0.03607496991753578,
0.046626731753349304,
-0.06950157135725021,
-0.0... | https://github.com/tajwarfahim/paprika |
13 | High-Dimensional Prediction for Sequential Decision Making | https://openreview.net/forum?id=uRAgIVnAO6 | [
"Georgy Noarov",
"Ramya Ramalingam",
"Aaron Roth",
"Stephan Xie"
] | Oral | We give an efficient algorithm for producing multi-dimensional forecasts in an online adversarial environment that have low bias subject to any polynomial number of conditioning events, that can depend both on external context and on our predictions themselves. We demonstrate the use of this algorithm with several applications. We show how to make predictions that can be transparently consumed by any polynomial number of downstream decision makers with different utility functions, guaranteeing them diminishing swap regret at optimal rates. We also give the first efficient algorithms for guaranteeing diminishing conditional regret in online combinatorial optimization problems for an arbitrary polynomial number of conditioning events --- i.e. on an arbitrary number of intersecting subsequences determined both by context and our own predictions. Finally, we give the first efficient algorithm for online multicalibration with $O(T^{2/3})$ rates in the ECE metric. | online decision making, combinatorial optimization, multicalibration, calibration, swap regret, no-regret, conditional guarantees | A general framework for online adversarial vector-valued prediction with decision making applications | 13,513 | 2310.17651 | [
-0.05363934487104416,
-0.0182009506970644,
-0.009881662204861641,
0.047327443957328796,
0.025602882727980614,
0.043326638638973236,
0.016372084617614746,
0.004297033883631229,
-0.010872195474803448,
-0.03978506848216057,
-0.01545311976224184,
0.02295861579477787,
-0.06837444007396698,
-0.0... | null |
14 | EmbodiedBench: Comprehensive Benchmarking Multi-modal Large Language Models for Vision-Driven Embodied Agents | https://openreview.net/forum?id=DgGF2LEBPS | [
"Rui Yang",
"Hanyang Chen",
"Junyu Zhang",
"Mark Zhao",
"Cheng Qian",
"Kangrui Wang",
"Qineng Wang",
"Teja Venkat Koripella",
"Marziyeh Movahedi",
"Manling Li",
"Heng Ji",
"Huan Zhang",
"Tong Zhang"
] | Oral | Leveraging Multi-modal Large Language Models (MLLMs) to create embodied agents offers a promising avenue for tackling real-world tasks. While language-centric embodied agents have garnered substantial attention, MLLM-based embodied agents remain underexplored due to the lack of comprehensive evaluation frameworks. To bridge this gap, we introduce EmbodiedBench, an extensive benchmark designed to evaluate vision-driven embodied agents.
EmbodiedBench features: (1) a diverse set of 1,128 testing tasks across four environments, ranging from high-level semantic tasks (e.g., household) to low-level tasks involving atomic actions (e.g., navigation and manipulation); and (2) six meticulously curated subsets evaluating essential agent capabilities like commonsense reasoning, complex instruction understanding, spatial awareness, visual perception, and long-term planning.
Through extensive experiments, we evaluated 24 leading proprietary and open-source MLLMs within EmbodiedBench. Our findings reveal that MLLMs excel at high-level tasks but struggle with low-level manipulation, with the best model, GPT-4o, scoring only $28.9\%$ on average. EmbodiedBench provides a multifaceted standardized evaluation platform that not only highlights existing challenges but also offers valuable insights to advance MLLM-based embodied agents. Our code and dataset are available at [https://embodiedbench.github.io](https://embodiedbench.github.io). | Embodied Agent, Multi-modal Large Language Models | We introduce EmbodiedBench, a benchmark designed to evaluate the fine-grained capabilities of vision-driven embodied agents. | 13,392 | 2502.09560 | [
0.008102494291961193, -0.031402576714754105, ... (768-dim embedding truncated) | https://github.com/EmbodiedBench/EmbodiedBench |
15 | Auditing $f$-differential privacy in one run | https://openreview.net/forum?id=OZSXYeqpI1 | [
"Saeed Mahloujifar",
"Luca Melis",
"Kamalika Chaudhuri"
] | Oral | Empirical auditing has emerged as a means of catching some of the flaws in the implementation of privacy-preserving algorithms. Existing auditing mechanisms, however, are either computationally inefficient -- requiring multiple runs of the machine learning algorithms -- or suboptimal in calculating an empirical privacy estimate. In this work, we present a tight and efficient auditing procedure and analysis that can effectively assess the privacy of mechanisms. Our approach is efficient: similar to the recent work of Steinke, Nasr and Jagielski (2023), our auditing procedure leverages the randomness of examples in the input dataset and requires only a single run of the target mechanism. It is also more accurate: we provide a novel analysis that enables us to achieve tight empirical privacy estimates by using the hypothesized $f$-DP curve of the mechanism, which provides a more accurate measure of privacy than the traditional $(\epsilon,\delta)$ differential privacy parameters. We use our auditing procedure and analysis to obtain empirical privacy estimates, demonstrating that our auditing procedure delivers tighter privacy estimates. | Empirical privacy, Auditing, Differential Privacy | We perform tests to audit whether a privacy mechanism satisfies $f$-differential privacy. We only invoke the privacy mechanism once. | 13,343 | null | [
-0.0005898004164919257, 0.0049765040166676044, ... (768-dim embedding truncated) | null |
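The abstract above leans on the notion of an $f$-DP trade-off curve. As a hedged illustration (this is not the paper's auditing procedure), the sketch below evaluates the standard trade-off curve of the Gaussian mechanism, $G_\mu(\alpha) = \Phi(\Phi^{-1}(1-\alpha) - \mu)$; the function name and the sample values of $\mu$ are ours.

```python
from statistics import NormalDist

def gaussian_tradeoff(alpha, mu):
    """Trade-off curve G_mu of mu-Gaussian DP: the smallest achievable
    type-II error for a distinguishing attacker at type-I error alpha."""
    nd = NormalDist()
    return nd.cdf(nd.inv_cdf(1.0 - alpha) - mu)

# Stronger privacy (smaller mu) keeps the attacker closer to blind
# guessing, where the type-II error would equal 1 - alpha.
for mu in (0.5, 1.0, 2.0):
    print(mu, round(gaussian_tradeoff(0.05, mu), 4))
```

A whole curve of such values, rather than a single $(\epsilon,\delta)$ pair, is what makes the $f$-DP description of a mechanism more informative.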
16 | Learning with Expected Signatures: Theory and Applications | https://openreview.net/forum?id=yDTwamN4LQ | [
"Lorenzo Lucchese",
"Mikko S. Pakkanen",
"Almut E. D. Veraart"
] | Oral | The expected signature maps a collection of data streams to a lower-dimensional representation, with a remarkable property: the resulting feature tensor can fully characterize the data generating distribution. This "model-free" embedding has been successfully leveraged to build multiple domain-agnostic machine learning (ML) algorithms for time series and sequential data. The convergence results proved in this paper bridge the gap between the expected signature's empirical discrete-time estimator and its theoretical continuous-time value, allowing for a more complete probabilistic interpretation of expected signature-based ML methods. Moreover, when the data generating process is a martingale, we suggest a simple modification of the expected signature estimator with significantly lower mean squared error and empirically demonstrate how it can be effectively applied to improve predictive performance. | Probabilistic Machine Learning, Signature, Expected Signature, Time Series, Rough Paths | null | 13,314 | 2505.20465 | [
-0.0007773066172376275, -0.028051353991031647, ... (768-dim embedding truncated) | https://github.com/lorenzolucchese/esig |
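The gap between the discrete-time signature estimator and its continuous-time value can be seen already for a one-dimensional path, where the exact level-2 iterated integral equals (total increment)^2 / 2. The sketch below (function name and test path are illustrative, not from the paper) shows the left-point Riemann estimator converging to that value as the sampling grid is refined.

```python
def discrete_level2(path):
    """Left-point Riemann estimate of the level-2 iterated integral
    int (x_t - x_0) dx_t of a 1-D path sampled at the given points."""
    return sum((path[i] - path[0]) * (path[i + 1] - path[i])
               for i in range(len(path) - 1))

# For any continuous 1-D path the exact level-2 value is increment**2 / 2;
# here x_t = t**2 on [0, 1], so the continuous-time value is 0.5.
for n in (10, 100, 1000):
    path = [(i / n) ** 2 for i in range(n + 1)]
    print(n, round(discrete_level2(path), 4))
```

The discrepancy shrinks like the sum of squared increments, which is the kind of discretization error the paper's convergence results control.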
17 | Statistical Query Hardness of Multiclass Linear Classification with Random Classification Noise | https://openreview.net/forum?id=EZV4edMGM1 | [
"Ilias Diakonikolas",
"Mingchen Ma",
"Lisheng Ren",
"Christos Tzamos"
] | Oral | We study the task of Multiclass Linear Classification (MLC) in the distribution-free PAC model with Random Classification Noise (RCN). Specifically, the learner is given a set of labeled examples $(x, y)$, where $x$ is drawn from an unknown distribution on $\mathbb{R}^d$ and the labels are generated by a multiclass linear classifier corrupted with RCN. That is, the label $y$ is flipped from $i$ to $j$ with probability $H_{ij}$ according to a known noise matrix $H$ with non-negative separation $\sigma := \min_{i \neq j} H_{ii} - H_{ij}$. The goal is to compute a hypothesis with small 0-1 error. For the special case of two labels, prior work has given polynomial-time algorithms achieving the optimal error. Surprisingly, little is known about the complexity of this task even for three labels. As our main contribution, we show that the complexity of MLC with RCN becomes drastically different in the presence of three or more labels. Specifically, we prove super-polynomial Statistical Query (SQ) lower bounds for this problem. In more detail, even for three labels and constant separation, we give a super-polynomial lower bound on the complexity of any SQ algorithm achieving optimal error. For a larger number of labels and smaller separation, we show a super-polynomial SQ lower bound even for the weaker goal of achieving any constant factor approximation to the optimal loss or even beating the trivial hypothesis. | Multiclass Linear Classification, Random Classification Noise, Statistical Query Learning | We prove the first SQ lower bounds for multiclass linear classification under random classification noise. | 13,262 | 2502.11413 | [
-0.0143427150323987, -0.0011680200695991516, ... (768-dim embedding truncated) | null |
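To make the separation parameter in the abstract above concrete, here is a small sketch (the example noise matrix and all names are illustrative, not from the paper) that computes $\sigma$ and samples RCN-corrupted labels from a row of $H$:

```python
import random

def separation(H):
    """sigma = min over i != j of H[i][i] - H[i][j]."""
    k = len(H)
    return min(H[i][i] - H[i][j] for i in range(k) for j in range(k) if j != i)

def flip_label(y, H, rng):
    """Sample an observed label: row y of H gives P(observed = j | true = y)."""
    r, cum = rng.random(), 0.0
    for j, p in enumerate(H[y]):
        cum += p
        if r < cum:
            return j
    return len(H[y]) - 1

H = [[0.8, 0.1, 0.1],
     [0.1, 0.8, 0.1],
     [0.1, 0.1, 0.8]]
rng = random.Random(0)
freq0 = sum(flip_label(0, H, rng) == 0 for _ in range(10000)) / 10000
print(separation(H), freq0)  # sigma = 0.7; roughly 80% of labels survive
```

With constant separation like this, the paper shows that no SQ algorithm can reach optimal error in polynomial time once there are three or more labels.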
18 | Expected Variational Inequalities | https://openreview.net/forum?id=LCbHsdtvOR | [
"Brian Hu Zhang",
"Ioannis Anagnostides",
"Emanuel Tewolde",
"Ratip Emin Berker",
"Gabriele Farina",
"Vincent Conitzer",
"Tuomas Sandholm"
] | Oral | *Variational inequalities (VIs)* encompass many fundamental problems in diverse areas ranging from engineering to economics and machine learning. However, their considerable expressivity comes at the cost of computational intractability. In this paper, we introduce and analyze a natural relaxation—which we refer to as *expected variational inequalities (EVIs)*—where the goal is to find a distribution that satisfies the VI constraint in expectation. By adapting recent techniques from game theory, we show that, unlike VIs, EVIs can be solved in polynomial time under general (nonmonotone) operators. EVIs capture the seminal notion of correlated equilibria, but enjoy a greater reach beyond games. We also employ our framework to capture and generalize several existing disparate results, including from settings such as smooth games, and games with coupled constraints or nonconcave utilities. | variational inequalities, correlated equilibria, game theory | We introduce a computationally tractable relaxation of variational inequalities. | 13,159 | 2502.18605 | [
-0.03574097901582718, 0.02103162556886673, ... (768-dim embedding truncated) | null |
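For context on the VI constraint $\langle F(x^*), y - x^*\rangle \ge 0$, here is a minimal sketch of the classical projection method for a monotone one-dimensional operator. The paper's contribution concerns the harder nonmonotone, in-expectation (EVI) setting; the function names and the example operator below are ours.

```python
def solve_vi(F, lo, hi, eta=0.1, iters=2000):
    """Projection method for a 1-D variational inequality on [lo, hi]:
    find x* such that F(x*) * (y - x*) >= 0 for every y in [lo, hi]."""
    x = 0.5 * (lo + hi)
    for _ in range(iters):
        # Gradient-like step on F followed by projection onto [lo, hi].
        x = min(max(x - eta * F(x), lo), hi)
    return x

# A monotone operator F(x) = x - 0.3: the VI solution is x* = 0.3.
print(round(solve_vi(lambda x: x - 0.3, 0.0, 1.0), 4))
# An operator that is positive everywhere pushes the solution to the
# boundary: x* = 0 satisfies F(x*) * (y - x*) >= 0 for all y.
print(solve_vi(lambda x: 1.0, 0.0, 1.0))
```

Such iterative schemes are only guaranteed to work under monotonicity, which is precisely the assumption the EVI relaxation is designed to avoid.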
19 | Improving the Scaling Laws of Synthetic Data with Deliberate Practice | https://openreview.net/forum?id=0LZRtvK871 | [
"Reyhane Askari-Hemmat",
"Mohammad Pezeshki",
"Elvis Dohmatob",
"Florian Bordes",
"Pietro Astolfi",
"Melissa Hall",
"Jakob Verbeek",
"Michal Drozdzal",
"Adriana Romero-Soriano"
] | Oral | Inspired by the principle of deliberate practice in human learning, we propose Deliberate Practice for Synthetic Data Generation (DP), a novel framework that improves sample efficiency through dynamic synthetic data generation. Prior work has shown that scaling synthetic data is inherently challenging, as naively adding new data leads to diminishing returns. To address this, pruning has been identified as a key mechanism for improving scaling, enabling models to focus on the most informative synthetic samples. Rather than generating a large dataset and pruning it afterward, DP efficiently approximates the direct generation of informative samples. We theoretically show how training on challenging, informative examples improves scaling laws and empirically validate that DP achieves better scaling performance with significantly fewer training samples and iterations. On ImageNet-100, DP generates 3.4x fewer samples and requires six times fewer iterations, while on ImageNet-1k, it generates 8x fewer samples with a 30% reduction in iterations, all while achieving superior performance compared to prior work. | Synthetic Data, Deliberate Practice, Active Learning, Sample Efficiency, Scaling Laws, Data Curation, Diffusion Models, Dataset Pruning | Deliberate Practice for Synthetic Data Generation (DP) dynamically generates informative samples to improve scaling efficiency, reducing sample requirements and training iterations while achieving superior performance. | 13,125 | 2502.15588 | [
0.016795460134744644, -0.053353745490312576, ... (768-dim embedding truncated) | null |
20 | Scaling Collapse Reveals Universal Dynamics in Compute-Optimally Trained Neural Networks | https://openreview.net/forum?id=Fvq9ogLnLN | [
"Shikai Qiu",
"Lechao Xiao",
"Andrew Gordon Wilson",
"Jeffrey Pennington",
"Atish Agarwala"
] | Oral | Understanding neural network training dynamics at scale is an important open problem. Although realistic model architectures, optimizers, and data interact in complex ways that make predictive theory challenging, we show that compute-optimally trained models exhibit remarkably precise collective regularities. Specifically, loss curves from models of varying sizes collapse onto a single universal curve when training compute and loss are normalized to unity at the end of training. With learning rate decay, discrepancies between normalized curves fall below the noise floor of individual models' loss curves across random seeds, yielding an exceptionally tight collapse we term "supercollapse." We observe supercollapse across learning rate schedules, datasets, and architectures, including transformers trained on next-token prediction. This collapse breaks down when hyperparameters are scaled suboptimally, providing a practical indicator of proper scaling. We explain these phenomena by connecting collapse to the power-law structure in typical neural scaling laws, and analyzing a simple but effective model of SGD noise dynamics that accurately captures how learning rate schedules deform loss curves away from power laws while preserving universality, and why learning rate decay suppresses variance to enable supercollapse. | Scaling Laws, Optimization | Loss curves from compute-optimally trained models collapse onto a universal shape, from which we can derive both theoretical insights and practical diagnostics for scaling. | 13,037 | 2507.02119 | [
-0.049824509769678116, -0.014151462353765965, ... (768-dim embedding truncated) | https://github.com/shikaiqiu/supercollapse |
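The normalization described in the abstract above is easy to state in code: rescale training compute and loss to unity at the end of training. Under the idealized assumption of pure power-law loss curves with a shared exponent (a simplification of the paper's setting; all names below are ours), curves of different scales and lengths collapse exactly:

```python
def normalized(curve):
    """Map a loss curve to {compute fraction: loss / final loss},
    normalizing both axes to unity at the end of training."""
    T, end = len(curve), curve[-1]
    return {(t + 1) / T: loss / end for t, loss in enumerate(curve)}

# Two pure power-law curves with different prefactors and training lengths:
# after normalization both reduce to f(x) = x**-0.5, so they collapse.
short = normalized([5.0 * t ** -0.5 for t in range(1, 101)])
long_ = normalized([9.0 * t ** -0.5 for t in range(1, 201)])
shared = sorted(set(short) & set(long_))
gap = max(abs(short[x] - long_[x]) for x in shared)
print(len(shared), gap)  # many shared compute fractions, gap ~ 0
```

In the paper, deviations from this kind of collapse beyond the seed-to-seed noise floor serve as a diagnostic of suboptimal hyperparameter scaling.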
21 | Cross-environment Cooperation Enables Zero-shot Multi-agent Coordination | https://openreview.net/forum?id=zBBYsVGKuB | [
"Kunal Jha",
"Wilka Carvalho",
"Yancheng Liang",
"Simon Shaolei Du",
"Max Kleiman-Weiner",
"Natasha Jaques"
] | Oral | Zero-shot coordination (ZSC), the ability to adapt to a new partner in a cooperative task, is a critical component of human-compatible AI. While prior work has focused on training agents to cooperate on a single task, these specialized models do not generalize to new tasks, even if they are highly similar. Here, we study how reinforcement learning on a **distribution of environments with a single partner** enables learning general cooperative skills that support ZSC with **many new partners on many new problems**. We introduce *two* Jax-based, procedural generators that create billions of solvable coordination challenges. We develop a new paradigm called **Cross-Environment Cooperation (CEC)**, and show that it outperforms competitive baselines quantitatively and qualitatively when collaborating with real people. Our findings suggest that learning to collaborate across many unique scenarios encourages agents to develop general norms, which prove effective for collaboration with different partners. Together, our results suggest a new route toward designing generalist cooperative agents capable of interacting with humans without requiring human data. | Zero-shot Coordination, Human-AI Collaboration, Multi-agent Interactions | Learning to generalize to novel coordination tasks with one partner lets you generalize to novel partners as well | 12,934 | 2504.12714 | [
-0.021278109401464462, -0.017738960683345795, ... (768-dim embedding truncated) | https://github.com/KJha02/crossEnvCooperation |
22 | Layer by Layer: Uncovering Hidden Representations in Language Models | https://openreview.net/forum?id=WGXb7UdvTX | [
"Oscar Skean",
"Md Rifat Arefin",
"Dan Zhao",
"Niket Nikul Patel",
"Jalal Naghiyev",
"Yann LeCun",
"Ravid Shwartz-Ziv"
] | Oral | From extracting features to generating text, the outputs of large language models (LLMs) typically rely on their final layers, following the conventional wisdom that earlier layers capture only low-level cues. However, our analysis shows that intermediate layers can encode even richer representations, often improving performance on a wide range of downstream tasks. To explain and quantify these hidden-layer properties, we propose a unified framework of representation quality metrics based on information theory, geometry, and invariance to input perturbations. Our framework highlights how each model layer balances information compression and signal preservation, revealing why mid-depth embeddings can exceed the last layer’s performance. Through extensive experiments on 32 text-embedding tasks across various architectures (transformers, state-space models) and domains (language, vision), we demonstrate that intermediate layers consistently provide stronger features, challenging the standard view on final-layer embeddings and opening new directions on using mid-layer representations for more robust and accurate representations. | large language model, entropy, augmentation, intermediate layer, vision transformer | An investigation into the quality and characteristics of intermediate LLM layers | 12,891 | 2502.02013 | [
-0.026006441563367844, -0.02086828462779522, ... (768-dim embedding truncated) | https://github.com/OFSkean/information_flow |
23 | The dark side of the forces: assessing non-conservative force models for atomistic machine learning | https://openreview.net/forum?id=OEl3L8osas | [
"Filippo Bigi",
"Marcel F. Langer",
"Michele Ceriotti"
] | Oral | The use of machine learning to estimate the energy of a group of atoms, and the forces that drive them to more stable configurations, has revolutionized the fields of computational chemistry and materials discovery.
In this domain, rigorous enforcement of symmetry and conservation laws has traditionally been considered essential. For this reason, interatomic forces are usually computed as the derivatives of the potential energy, ensuring energy conservation.
Several recent works have questioned this physically constrained approach, suggesting that directly predicting the forces yields a better trade-off between accuracy and computational efficiency -- and that energy conservation can be learned during training.
This work investigates the applicability of such non-conservative models in microscopic simulations.
We identify and demonstrate several fundamental issues, from ill-defined convergence of geometry optimization to instability in various types of molecular dynamics.
Contrary to the case of rotational symmetry, energy conservation is hard to learn, monitor, and correct for.
The best approach to exploit the acceleration afforded by direct force prediction might be to use it in tandem with a conservative model, reducing -- rather than eliminating -- the additional cost of backpropagation, but avoiding the pathological behavior associated with non-conservative forces. | geometric machine learning, energy conservation, atomistic modelling, molecular dynamics, statistical mechanics | An assessment of non-energy-conserving geometric machine learning models for atomic-scale systems | 12,593 | 2412.11569 | [
-0.03452867642045021, -0.00976346805691719, ... (768-dim embedding truncated) | null |
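The conservative baseline the abstract refers to computes forces as the negative gradient of the potential energy, which for a translation-invariant potential automatically makes the forces sum to zero; a directly predicted force field need not satisfy either property. A minimal finite-difference sketch (the toy potential and names are illustrative, not from the paper):

```python
def grad_forces(energy, x, h=1e-5):
    """Conservative forces as the negative gradient of a potential,
    estimated with central finite differences."""
    forces = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        forces.append(-(energy(xp) - energy(xm)) / (2.0 * h))
    return forces

# Toy potential: a harmonic bond of rest length 1 between two 1-D atoms.
def energy(x):
    return 0.5 * (x[1] - x[0] - 1.0) ** 2

f = grad_forces(energy, [0.0, 1.5])
print([round(v, 6) for v in f])  # equal and opposite forces
```

In practice conservative models backpropagate through the energy rather than using finite differences, but the structural guarantee is the same; it is exactly what is lost when forces are predicted directly.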
24 | AdaSplash: Adaptive Sparse Flash Attention | https://openreview.net/forum?id=OWIPDWhUcO | [
"Nuno Gonçalves",
"Marcos V Treviso",
"Andre Martins"
] | Oral | The computational cost of softmax-based attention in transformers limits their applicability to long-context tasks. Adaptive sparsity, of which $\alpha$-entmax attention is an example, offers a flexible data-dependent alternative, but existing implementations are inefficient and do not leverage the sparsity to obtain runtime and memory gains. In this work, we propose AdaSplash, which combines the efficiency of GPU-optimized algorithms with the sparsity benefits of $\alpha$-entmax. We first introduce a hybrid Halley-bisection algorithm, resulting in a 7-fold reduction in the number of iterations needed to compute the $\alpha$-entmax transformation. Then, we implement custom Triton kernels to efficiently handle adaptive sparsity. Experiments with RoBERTa and ModernBERT for text classification and single-vector retrieval, along with GPT-2 for language modeling, show that our method achieves substantial improvements in runtime and memory efficiency compared to existing $\alpha$-entmax implementations. It approaches---and in some cases surpasses---the efficiency of highly optimized softmax implementations like FlashAttention-2, enabling long-context training while maintaining strong task performance. | Sparse Attention, Flash Attention, Adaptive Sparsity, Long Context Transformers | An efficient flash attention implementation for adaptive sparsity. | 12,577 | 2502.12082 | [
-0.032633934170007706, -0.02573561854660511, ... (768-dim embedding truncated) | https://github.com/deep-spin/adasplash |
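For $\alpha = 1.5$, the entmax transformation has the closed form $p_i = \max(z_i/2 - \tau, 0)^2$, with $\tau$ the threshold that makes the probabilities sum to one. The paper's hybrid Halley-bisection root-finder converges much faster; the sketch below (names are ours) uses plain bisection for $\tau$, yet already exhibits the exact sparsity that AdaSplash exploits.

```python
def entmax15(z, n_iter=60):
    """alpha-entmax with alpha = 1.5: p_i = max(z_i / 2 - tau, 0) ** 2,
    with the threshold tau found by bisection so the p_i sum to 1."""
    s = [v / 2.0 for v in z]
    # Bracket: at tau = max(s) - 1 the top term alone contributes 1,
    # so the sum is >= 1; at tau = max(s) the sum is 0.
    lo, hi = max(s) - 1.0, max(s)
    for _ in range(n_iter):
        tau = 0.5 * (lo + hi)
        if sum(max(v - tau, 0.0) ** 2 for v in s) > 1.0:
            lo = tau
        else:
            hi = tau
    tau = 0.5 * (lo + hi)
    return [max(v - tau, 0.0) ** 2 for v in s]

p = entmax15([2.0, 1.0, -1.0, -2.0])
print(p)  # sparse: the low-scoring entries are exactly zero
```

Entries whose score falls below the threshold receive exactly zero probability, which is what lets a kernel skip the corresponding attention blocks entirely.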
25 | Temporal Difference Flows | https://openreview.net/forum?id=j6H7c3aQyb | [
"Jesse Farebrother",
"Matteo Pirotta",
"Andrea Tirinzoni",
"Remi Munos",
"Alessandro Lazaric",
"Ahmed Touati"
] | Oral | Predictive models of the future are fundamental for an agent's ability to reason and plan. A common strategy learns a world model and unrolls it step-by-step at inference, where small errors can rapidly compound. Geometric Horizon Models (GHMs) offer a compelling alternative by directly making predictions of future states, avoiding cumulative inference errors. While GHMs can be conveniently learned by a generative analog to temporal difference (TD) learning, existing methods are negatively affected by bootstrapping predictions at train time and struggle to generate high-quality predictions at long horizons. This paper introduces Temporal Difference Flows (TD-Flow), which leverages the structure of a novel Bellman equation on probability paths alongside flow-matching techniques to learn accurate GHMs at over 5x the horizon length of prior methods. Theoretically, we establish a new convergence result and primarily attribute TD-Flow's efficacy to reduced gradient variance during training. We further show that similar arguments can be extended to diffusion-based methods. Empirically, we validate TD-Flow across a diverse set of domains on both generative metrics and downstream tasks, including policy evaluation. Moreover, integrating TD-Flow with recent behavior foundation models for planning over policies demonstrates substantial performance gains, underscoring its promise for long-horizon decision-making. | Reinforcement Learning, Geometric Horizon Model, Gamma-Model, Temporal Difference Learning, Successor Measure, Flow Matching | null | 12,288 | 2503.09817 | [
-0.008205203339457512, -0.014231509529054165, ... (768-dim embedding truncated) | null |
26 | Score Matching with Missing Data | https://openreview.net/forum?id=mBstuGUaXo | [
"Josh Givens",
"Song Liu",
"Henry Reeve"
] | Oral | Score matching is a vital tool for learning the distribution of data with applications across many areas including diffusion processes, energy based modelling, and graphical model estimation. Despite all these applications, little work explores its use when data is incomplete. We address this by adapting score matching (and its major extensions) to work with missing data in a flexible setting where data can be partially missing over any subset of the coordinates. We provide two separate score matching variations for general use, an importance weighting (IW) approach, and a variational approach. We provide finite sample bounds for our IW approach in finite domain settings and show it to have especially strong performance in small sample lower dimensional cases. Complementing this, we show our variational approach to be strongest in more complex high-dimensional settings which we demonstrate on graphical model estimation tasks on both real and simulated data. | Score Matching, Missing Data, Variational Inference | We adapt score matching to missing data and demonstrate its general applicability by applying it to graphical model estimation. | 12,176 | 2506.00557 | [
-0.03508879989385605, -0.012734951451420784, ... (768-dim embedding truncated) | https://github.com/joshgivens/ScoreMatchingwithMissingData |
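As background, the classical fully observed score-matching objective the paper adapts is $\mathbb{E}[\tfrac12 \lVert s_\theta(x)\rVert^2 + \mathrm{tr}\,\nabla_x s_\theta(x)]$. For a zero-mean 1-D Gaussian model with score $s(x) = -x/\sigma^2$, each sample contributes $\tfrac12 x^2/\sigma^4 - 1/\sigma^2$, so the objective is minimized at $\sigma^2 = \mathbb{E}[x^2]$. A toy grid-search sketch (sample sizes, grid, and names are ours; this is the vanilla objective, not the paper's missing-data variants):

```python
import random

rng = random.Random(0)
data = [rng.gauss(0.0, 2.0) for _ in range(20000)]  # true variance 4

def sm_loss(sigma2, xs):
    """Empirical score-matching objective for a zero-mean Gaussian model
    with score s(x) = -x / sigma2: mean of 0.5 * s(x)**2 + s'(x)."""
    return sum(0.5 * (x / sigma2) ** 2 - 1.0 / sigma2 for x in xs) / len(xs)

# Grid search over the model variance; the minimizer tracks E[x^2].
grid = [k / 10 for k in range(10, 81)]
best = min(grid, key=lambda s2: sm_loss(s2, data))
print(best)  # close to the true variance, 4.0
```

Note that the objective never evaluates the (unknown) normalizing constant of the density, which is the usual reason for preferring score matching.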
27 | Roll the dice & look before you leap: Going beyond the creative limits of next-token prediction | https://openreview.net/forum?id=Hi0SyHMmkd | [
"Vaishnavh Nagarajan",
"Chen Henry Wu",
"Charles Ding",
"Aditi Raghunathan"
] | Oral | We design a suite of minimal algorithmic tasks that are a loose abstraction of _open-ended_ real-world tasks. This allows us to cleanly and controllably quantify the creative limits of present-day language models.
Much like real-world tasks that require a creative, far-sighted leap of thought, our tasks require an implicit, open-ended _stochastic_ planning step that either (a) discovers new connections in an abstract knowledge graph (like in wordplay, drawing analogies, or research) or (b) constructs new patterns (like in designing math problems or new proteins). In these tasks, we empirically and conceptually argue how next-token learning is myopic and memorizes excessively; multi-token approaches, namely teacherless training and diffusion models, comparatively excel in producing diverse and original output.
Secondly, to elicit randomness without hurting coherence, we find that injecting noise at the input layer (dubbed _seed-conditioning_) works surprisingly as well as (and in some conditions, better than) temperature sampling from the output layer. Thus, our work offers a principled, minimal test-bed for analyzing open-ended creative skills, and offers new arguments for going beyond next-token learning and temperature sampling. We make part of the code available under https://github.com/chenwu98/algorithmic-creativity | next-token prediction, multi-token prediction, creativity | We design a series of open-ended algorithmic tasks inspired by creative tasks and show that multi-token prediction and seed-conditioning lead to much more creative planning than next-token prediction. | 12,175 | 2504.15266 | [
-0.01032083947211504, -0.013407246209681034, ... (768-dim embedding truncated) | https://github.com/ChenWu98/algorithmic-creativity |
28 | How Do Large Language Monkeys Get Their Power (Laws)? | https://openreview.net/forum?id=QqVZ28qems | [
"Rylan Schaeffer",
"Joshua Kazdan",
"John Hughes",
"Jordan Juravsky",
"Sara Price",
"Aengus Lynch",
"Erik Jones",
"Robert Kirk",
"Azalia Mirhoseini",
"Sanmi Koyejo"
] | Oral | Recent research across mathematical problem solving, proof assistant programming and multimodal jailbreaking documents a striking finding: when (multimodal) language models tackle a suite of tasks with multiple attempts per task -- succeeding if any attempt is correct -- then the negative log of the average success rate scales as a power law in the number of attempts.
In this work, we identify an apparent puzzle: a simple mathematical calculation predicts that on each problem, the failure rate should fall exponentially with the number of attempts.
We confirm this prediction empirically, raising a question: from where does aggregate polynomial scaling emerge?
We then answer this question by demonstrating that per-problem exponential scaling can be made consistent with aggregate polynomial scaling if the distribution of single-attempt success probabilities is heavy tailed, such that a small fraction of tasks with extremely low success probabilities collectively warp the aggregate success trend into a power law, even as each problem scales exponentially on its own.
We further demonstrate that this distributional perspective explains previously observed deviations from power law scaling, and provides a simple method for forecasting the power law exponent with an order of magnitude lower relative error, or equivalently, ${\sim}2-4$ orders of magnitude less inference compute.
Overall, our work contributes to a better understanding of how neural language model performance improves with scaling inference compute and the development of scaling-predictable evaluations of (multimodal) language models. | scaling laws, inference compute, scaling inference compute, language models, evaluations, scaling-predictable evaluations | null | 12,080 | 2502.17578 | [
-0.031062928959727287, -0.0013453139690682292, ... (768-dim embedding truncated) | https://github.com/RylanSchaeffer/KoyejoLab-Large-How-Do-Language-Monkey-Power-Get-Their-Power |
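The resolution described in the abstract can be reproduced in a few lines: give each task an exponentially decaying failure curve $(1-p)^k$ and draw $p$ from a heavy-tailed distribution, and the aggregate failure rate decays only polynomially. The Beta(0.2, 2) choice below is ours, picked so the density behaves like $p^{-0.8}$ near zero, yielding an aggregate decay of roughly $k^{-0.2}$:

```python
import math
import random

rng = random.Random(0)
# Heavy-tailed single-attempt success probabilities: a small mass of
# near-impossible tasks dominates the aggregate trend.
probs = [rng.betavariate(0.2, 2.0) for _ in range(20000)]

def aggregate_failure(k):
    """Average failure rate when each task gets k independent attempts."""
    return sum((1.0 - p) ** k for p in probs) / len(probs)

# Per task the failure rate is exponential in k, yet the aggregate decays
# only polynomially; the log-log slope between k = 8 and k = 64 is near
# the Beta tail index, -0.2.
for k in (1, 4, 16, 64):
    print(k, round(aggregate_failure(k), 4))
slope = math.log(aggregate_failure(64) / aggregate_failure(8)) / math.log(8)
print(round(slope, 2))
```

Fitting the tail of the single-attempt success distribution, rather than the aggregate curve, is essentially the forecasting shortcut the paper advocates.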
29 | Addressing Misspecification in Simulation-based Inference through Data-driven Calibration | https://openreview.net/forum?id=y3d4Bs2r7r | [
"Antoine Wehenkel",
"Juan L. Gamella",
"Ozan Sener",
"Jens Behrmann",
"Guillermo Sapiro",
"Joern-Henrik Jacobsen",
"marco cuturi"
] | Oral | Driven by steady progress in deep generative modeling, simulation-based inference (SBI) has emerged as the workhorse for inferring the parameters of stochastic simulators. However, recent work has demonstrated that model misspecification can harm SBI's reliability, preventing its adoption in important applications where only misspecified simulators are available.
This work introduces robust posterior estimation (RoPE), a framework that overcomes model misspecification with a small real-world calibration set of ground truth parameter measurements.
We formalize the misspecification gap as the solution of an optimal transport (OT) problem between learned representations of real-world and simulated observations, allowing RoPE to learn a model of the misspecification without placing additional assumptions on its nature. RoPE shows how the calibration set and OT together offer a controllable balance between calibrated uncertainty and informative inference even under severely misspecified simulators. Results on four synthetic tasks and two real-world problems with ground-truth labels demonstrate that RoPE outperforms baselines and consistently returns informative and calibrated credible intervals. | Simulation-based Inference, Misspecification, Optimal Transport, Bayesian Inference, Neural Posterior Estimation, Robust Inference | null | 11,889 | 2405.08719 | [
-0.020830919966101646, 0.0005135148530825973, ... (768-dim embedding truncated) | null |
30 | A Unified Framework for Entropy Search and Expected Improvement in Bayesian Optimization | https://openreview.net/forum?id=LbJQYNSH41 | [
"Nuojin Cheng",
"Leonard Papenmeier",
"Stephen Becker",
"Luigi Nardi"
] | Oral | Bayesian optimization is a widely used method for optimizing expensive black-box functions, with Expected Improvement being one of the most commonly used acquisition functions. In contrast, information-theoretic acquisition functions aim to reduce uncertainty about the function’s optimum and are often considered fundamentally distinct from EI. In this work, we challenge this prevailing perspective by introducing a unified theoretical framework, Variational Entropy Search, which reveals that EI and information-theoretic acquisition functions are more closely related than previously recognized. We demonstrate that EI can be interpreted as a variational inference approximation of the popular information-theoretic acquisition function, named Max-value Entropy Search. Building on this insight, we propose VES-Gamma, a novel acquisition function that balances the strengths of EI and MES. Extensive empirical evaluations across both low- and high-dimensional synthetic and real-world benchmarks demonstrate that VES-Gamma is competitive with state-of-the-art acquisition functions and in many cases outperforms EI and MES. | Bayesian optimization, Entropy search, Variational inference, Acquisition function | We propose a unified framework that bridges entropy search and expected improvement, enhancing expected improvement for improved performance. | 11,682 | 2501.18756 | [
-0.006754751317203045, 0.027685968205332756, ... (768-dim embedding truncated) | https://github.com/NUOJIN/variational-entropy-search |
31 | Conformal Prediction as Bayesian Quadrature | https://openreview.net/forum?id=PNmkjIzHB7 | [
"Jake C. Snell",
"Thomas L. Griffiths"
] | Oral | As machine learning-based prediction systems are increasingly used in high-stakes situations, it is important to understand how such predictive models will perform upon deployment. Distribution-free uncertainty quantification techniques such as conformal prediction provide guarantees about the loss black-box models will incur even when the details of the models are hidden. However, such methods are based on frequentist probability, which unduly limits their applicability. We revisit the central aspects of conformal prediction from a Bayesian perspective and thereby illuminate the shortcomings of frequentist guarantees. We propose a practical alternative based on Bayesian quadrature that provides interpretable guarantees and offers a richer representation of the likely range of losses to be observed at test time. | bayesian quadrature, probabilistic numerics, conformal prediction, distribution-free uncertainty quantification | We propose an alternative to conformal prediction based on Bayesian quadrature that produces a distribution over test-time risk. | 11,432 | 2502.13228 | [
0.01858520694077015,
-0.002762889489531517,
0.0006300908280536532,
0.02871488220989704,
0.055223312228918076,
-0.001418139785528183,
0.009991664439439774,
-0.01981346495449543,
-0.031169258058071136,
-0.041913118213415146,
-0.011241389438509941,
0.002461442956700921,
-0.08134104311466217,
... | https://github.com/jakesnell/conformal-as-bayes-quad |
32 | Learning Time-Varying Multi-Region Brain Communications via Scalable Markovian Gaussian Processes | https://openreview.net/forum?id=pOAEfqa26i | [
"Weihan Li",
"Yule Wang",
"Chengrui Li",
"Anqi Wu"
] | Oral | Understanding and constructing brain communications that capture dynamic interactions across multiple regions is fundamental to modern systems neuroscience, yet current methods struggle to find time-varying region-level communications or scale to large neural datasets with long recording durations. We present a novel framework using Markovian Gaussian Processes to learn brain communications with time-varying temporal delays from multi-region neural recordings, named Adaptive Delay Model (ADM). Our method combines Gaussian Processes with State Space Models and employs parallel scan inference algorithms, enabling efficient scaling to large datasets while identifying concurrent communication patterns that evolve over time. This time-varying approach captures how brain region interactions shift dynamically during cognitive processes. Validated on synthetic and multi-region neural recording datasets, our approach discovers both the directionality and temporal dynamics of neural communication. This work advances our understanding of distributed neural computation and provides a scalable tool for analyzing dynamic brain networks. Code is available at https://github.com/BRAINML-GT/Adaptive-Delay-Model. | Multiple Brain Region Communications; Markovian Gaussian Processes; State Space Model | We developed a scalable method using Markovian Gaussian Processes (State Space Model) to track how multiple brain regions communicate dynamically over time. | 11,292 | 2407.00397 | [
-0.03241768851876259,
0.010383646003901958,
-0.027385404333472252,
0.004705025814473629,
0.0432833768427372,
0.025788938626646996,
0.0378355048596859,
0.014515249989926815,
-0.03916799649596214,
-0.07149074226617813,
-0.0012040550354868174,
0.011307216249406338,
-0.05128505453467369,
0.016... | https://github.com/BRAINML-GT/Adaptive-Delay-Model |
33 | Fully Dynamic Euclidean Bi-Chromatic Matching in Sublinear Update Time | https://openreview.net/forum?id=up21Rwj5Fo | [
"Gramoz Goranci",
"Peter Kiss",
"Neel Patel",
"Martin P. Seybold",
"Eva Szilagyi",
"Da Wei Zheng"
] | Oral | We consider the Euclidean bi-chromatic matching problem in the dynamic setting, where the goal is to efficiently process point insertions and deletions while maintaining a high-quality solution. Computing the minimum cost bi-chromatic matching is one of the core problems in geometric optimization that has found many applications, most notably in estimating Wasserstein distance between two distributions. In this work, we present the first fully dynamic algorithm for Euclidean bi-chromatic matching with sublinear update time. For any fixed $\varepsilon > 0$, our algorithm achieves $O(1/\varepsilon)$-approximation and handles updates in $O(n^{\varepsilon})$ time. Our experiments show that our algorithm enables effective monitoring of the distributional drift in the Wasserstein distance on real and synthetic data sets, while outperforming the runtime of baseline approximations by orders of magnitude. | Euclidean bi-chromatic matching, dynamic algorithm, 1-Wasserstein distance | null | 11,181 | 2505.09010 | [
-0.01129536610096693,
-0.017938166856765747,
0.006859261076897383,
0.036380063742399216,
0.04205693304538727,
0.07222167402505875,
0.00739015219733119,
0.02645602449774742,
-0.03233019635081291,
-0.07564080506563187,
-0.02647995389997959,
-0.028999997302889824,
-0.07135188579559326,
0.0120... | null |
34 | The Value of Prediction in Identifying the Worst-Off | https://openreview.net/forum?id=26JsumCG0z | [
"Unai Fischer-Abaigar",
"Christoph Kern",
"Juan Carlos Perdomo"
] | Oral | Machine learning is increasingly used in government programs to identify and support the most vulnerable individuals, prioritizing assistance for those at greatest risk over optimizing aggregate outcomes. This paper examines the welfare impacts of prediction in equity-driven contexts, and how they compare to other policy levers, such as expanding bureaucratic capacity. Through mathematical models and a real-world case study on long-term unemployment amongst German residents, we develop a comprehensive understanding of the relative effectiveness of prediction in surfacing the worst-off. Our findings provide clear analytical frameworks and practical, data-driven tools that empower policymakers to make principled decisions when designing these systems. | algorithmic decision making, resource allocation, machine learning and public policy | null | 11,119 | 2501.19334 | [
-0.01244431734085083,
-0.03205063194036484,
-0.011076499707996845,
0.010298693552613258,
0.05924583226442337,
0.03605145588517189,
0.01330631598830223,
-0.02589959092438221,
-0.031014658510684967,
-0.0326215960085392,
-0.030589541420340538,
0.009308193810284138,
-0.06259553879499435,
-0.00... | null |
35 | Learning dynamics in linear recurrent neural networks | https://openreview.net/forum?id=KGOcrIWYnx | [
"Alexandra Maria Proca",
"Clémentine Carla Juliette Dominé",
"Murray Shanahan",
"Pedro A. M. Mediano"
] | Oral | Recurrent neural networks (RNNs) are powerful models used widely in both machine learning and neuroscience to learn tasks with temporal dependencies and to model neural dynamics. However, despite significant advancements in the theory of RNNs, there is still limited understanding of their learning process and the impact of the temporal structure of data. Here, we bridge this gap by analyzing the learning dynamics of linear RNNs (LRNNs) analytically, enabled by a novel framework that accounts for task dynamics. Our mathematical result reveals four key properties of LRNNs: (1) Learning of data singular values is ordered by both scale and temporal precedence, such that singular values that are larger and occur later are learned faster. (2) Task dynamics impact solution stability and extrapolation ability. (3) The loss function contains an effective regularization term that incentivizes small weights and mediates a tradeoff between recurrent and feedforward computation. (4) Recurrence encourages feature learning, as shown through a novel derivation of the neural tangent kernel for finite-width LRNNs. As a final proof-of-concept, we apply our theoretical framework to explain the behavior of LRNNs performing sensory integration tasks. Our work provides a first analytical treatment of the relationship between the temporal dependencies in tasks and learning dynamics in LRNNs, building a foundation for understanding how complex dynamic behavior emerges in cognitive models. | deep learning, learning dynamics, RNNs, rich and lazy learning, teacher-student, neuroscience | We study the learning dynamics of linear RNNs using a novel framework that accounts for task dynamics. | 11,083 | null | [
-0.02129901945590973,
0.005219988990575075,
-0.005816278979182243,
0.030236227437853813,
0.05530635267496109,
0.060753338038921356,
0.026367289945483208,
0.017647719010710716,
-0.07115199416875839,
-0.035384830087423325,
-0.005333644337952137,
0.013746881857514381,
-0.05385548993945122,
0.... | https://github.com/aproca/LRNN_dynamics |
36 | AffectGPT: A New Dataset, Model, and Benchmark for Emotion Understanding with Multimodal Large Language Models | https://openreview.net/forum?id=xmbdACI0xu | [
"Zheng Lian",
"Haoyu Chen",
"Lan Chen",
"Haiyang Sun",
"Licai Sun",
"Yong Ren",
"Zebang Cheng",
"Bin Liu",
"Rui Liu",
"Xiaojiang Peng",
"Jiangyan Yi",
"Jianhua Tao"
] | Oral | The emergence of multimodal large language models (MLLMs) advances multimodal emotion recognition (MER) to the next level—from naive discriminative tasks to complex emotion understanding with advanced video understanding abilities and natural language description. However, the current community suffers from a lack of large-scale datasets with intensive, descriptive emotion annotations, as well as a multimodal-centric framework to maximize the potential of MLLMs for emotion understanding. To address this, we establish a new benchmark for MLLM-based emotion understanding with a novel dataset (MER-Caption) and a new model (AffectGPT). Utilizing our model-based crowd-sourcing data collection strategy, we construct the largest descriptive emotion dataset to date (by far), featuring over 2K fine-grained emotion categories across 115K samples. We also introduce the AffectGPT model, designed with pre-fusion operations to enhance multimodal integration. Finally, we present MER-UniBench, a unified benchmark with evaluation metrics tailored for typical MER tasks and the free-form, natural language output style of MLLMs. Extensive experimental results show AffectGPT's robust performance across various MER tasks. We have released both the code and the dataset to advance research and development in emotion understanding: https://github.com/zeroQiaoba/AffectGPT. | multimodal emotion recognition, AffectGPT, MER-Caption, MER-UniBench | null | 11,009 | 2501.16566 | [
0.005094279069453478,
-0.034526653587818146,
0.015455083921551704,
0.03082922101020813,
-0.0086019616574049,
0.012943780049681664,
0.0280462596565485,
0.010429857298731804,
0.0009863937739282846,
-0.021937742829322815,
-0.03519713506102562,
0.014350843615829945,
-0.0737331211566925,
-0.024... | https://github.com/zeroQiaoba/AffectGPT |
37 | DeFoG: Discrete Flow Matching for Graph Generation | https://openreview.net/forum?id=KPRIwWhqAZ | [
"Yiming QIN",
"Manuel Madeira",
"Dorina Thanou",
"Pascal Frossard"
] | Oral | Graph generative models are essential across diverse scientific domains by capturing complex distributions over relational data. Among them, graph diffusion models achieve superior performance but face inefficient sampling and limited flexibility due to the tight coupling between training and sampling stages. We introduce DeFoG, a novel graph generative framework that disentangles sampling from training, enabling a broader design space for more effective and efficient model optimization. DeFoG employs a discrete flow-matching formulation that respects the inherent symmetries of graphs. We theoretically ground this disentangled formulation by explicitly relating the training loss to the sampling algorithm and showing that DeFoG faithfully replicates the ground truth graph distribution. Building on these foundations, we thoroughly investigate DeFoG's design space and propose novel sampling methods that significantly enhance performance and reduce the required number of refinement steps. Extensive experiments demonstrate state-of-the-art performance across synthetic, molecular, and digital pathology datasets, covering both unconditional and conditional generation settings. It also outperforms most diffusion-based models with just 5–10\% of their sampling steps. | Graph Generation, Flow Matching | We propose DeFoG, a discrete flow matching-based framework for graph generation with improved sampling efficiency and state-of-the-art performance across synthetic and molecular datasets. | 10,540 | 2410.04263 | [
0.003417256288230419,
0.0028058583848178387,
-0.011744302697479725,
0.06101195141673088,
0.04516930133104324,
0.028944414108991623,
0.023233691230416298,
0.014173071831464767,
0.016261132434010506,
-0.06322021037340164,
0.02281586267054081,
-0.0303906612098217,
-0.08233510702848434,
0.0075... | https://github.com/manuelmlmadeira/DeFoG |
38 | One-Step Generalization Ratio Guided Optimization for Domain Generalization | https://openreview.net/forum?id=Tv2JDGw920 | [
"Sumin Cho",
"Dongwon Kim",
"Kwangsu Kim"
] | Oral | Domain Generalization (DG) aims to train models that generalize to unseen target domains but often overfit to domain-specific features, known as undesired correlations. Gradient-based DG methods typically guide gradients in a dominant direction but often inadvertently reinforce spurious correlations. Recent work has employed dropout to regularize overconfident parameters, but has not explicitly adjusted gradient alignment or ensured balanced parameter updates. We propose GENIE (Generalization-ENhancing Iterative Equalizer), a novel optimizer that leverages the One-Step Generalization Ratio (OSGR) to quantify each parameter's contribution to loss reduction and assess gradient alignment. By dynamically equalizing OSGR via a preconditioning factor, GENIE prevents a small subset of parameters from dominating optimization, thereby promoting domain-invariant feature learning. Theoretically, GENIE balances convergence contribution and gradient alignment among parameters, achieving higher OSGR while retaining SGD's convergence rate. Empirically, it outperforms existing optimizers and enhances performance when integrated with various DG and single-DG methods. | Domain Generalization, Optimization, Preconditioning, One-Step Generalization Ratio (OSGR), Out-of-Distribution | We propose GENIE, a novel optimizer that leverages the One-Step Generalization Ratio to dynamically balance parameter contributions, mitigating source-domain overfitting and achieving superior generalization performance in Domain Generalization. | 10,381 | null | [
-0.004090493079274893,
-0.004386196378618479,
0.0386023223400116,
0.026364056393504143,
0.03436649590730667,
0.03165575861930847,
0.06266896426677704,
-0.02846449241042137,
-0.008976617828011513,
-0.03886710852384567,
-0.0027551068924367428,
0.005210195202380419,
-0.0797378346323967,
0.009... | https://github.com/00ssum/GENIE |
39 | CodeIO: Condensing Reasoning Patterns via Code Input-Output Prediction | https://openreview.net/forum?id=feIaF6vYFl | [
"Junlong Li",
"Daya Guo",
"Dejian Yang",
"Runxin Xu",
"Yu Wu",
"Junxian He"
] | Oral | Reasoning is a fundamental capability of Large Language Models. While prior research predominantly focuses on enhancing narrow skills like math or code generation, improving performance on many other reasoning tasks remains challenging due to sparse and fragmented training data. To address this issue, we propose CodeI/O, a novel approach that systematically condenses diverse reasoning patterns inherently embedded in contextually-grounded codes, by transforming the original code into a code input-output prediction format. By training models to predict inputs/outputs given code and test cases entirely in natural language as Chain-of-Thought (CoT) rationales, we expose them to universal reasoning primitives—like logic flow planning, state-space searching, decision tree traversal, and modular decomposition—while decoupling structured reasoning from code-specific syntax and preserving procedural rigor. Experimental results demonstrate that CodeI/O leads to consistent improvements across symbolic, scientific, logic, math & numerical, and commonsense reasoning tasks. By matching the existing ground-truth outputs or re-executing the code with predicted inputs, we can verify each prediction and further enhance the CoTs through multi-turn revision, resulting in CodeI/O++ and achieving higher performance. Our data and models will be publicly available. | Large Language Models, Reasoning, Code Execution | We teach the models to predict code inputs and outputs to improve their general reasoning ability. | 9,627 | null | [
0.0007106206030584872,
-0.026523977518081665,
-0.01792941242456436,
0.03797358274459839,
0.06655416637659073,
0.0296577550470829,
0.013570168986916542,
0.0163095835596323,
-0.022214587777853012,
-0.021674156188964844,
-0.0300573892891407,
0.028456632047891617,
-0.0593300387263298,
-0.01204... | https://github.com/hkust-nlp/CodeIO |
40 | In-Context Denoising with One-Layer Transformers: Connections between Attention and Associative Memory Retrieval | https://openreview.net/forum?id=F08lzoBgad | [
"Matthew Smart",
"Alberto Bietti",
"Anirvan M. Sengupta"
] | Oral | We introduce in-context denoising, a task that refines the connection between attention-based architectures and dense associative memory (DAM) networks, also known as modern Hopfield networks. Using a Bayesian framework, we show theoretically and empirically that certain restricted denoising problems can be solved optimally even by a single-layer transformer. We demonstrate that a trained attention layer processes each denoising prompt by performing a single gradient descent update on a context-aware DAM energy landscape, where context tokens serve as associative memories and the query token acts as an initial state. This one-step update yields better solutions than exact retrieval of either a context token or a spurious local minimum, providing a concrete example of DAM networks extending beyond the standard retrieval paradigm. Overall, this work solidifies the link between associative memory and attention mechanisms first identified by Ramsauer et al., and demonstrates the relevance of associative memory models in the study of in-context learning. | attention, in-context learning, denoising, associative memory, Hopfield network, transformers | We show that one-layer transformers perform optimal in-context denoising through a single step of context-dependent associative memory inference. | 9,353 | 2502.05164 | [
-0.015932908281683922,
0.03301049396395683,
-0.008351082913577557,
0.04378288984298706,
0.009350565262138844,
0.022312836721539497,
0.014035016298294067,
0.018924318253993988,
-0.0504203625023365,
-0.03291743993759155,
-0.012440879829227924,
0.01617427170276642,
-0.04831486567854881,
-0.02... | https://github.com/mattsmart/in-context-denoising |
41 | Foundation Model Insights and a Multi-Model Approach for Superior Fine-Grained One-shot Subset Selection | https://openreview.net/forum?id=ZdqTePSV1K | [
"Zhijing Wan",
"Zhixiang Wang",
"Zheng Wang",
"Xin Xu",
"Shin'ichi Satoh"
] | Oral | One-shot subset selection serves as an effective tool to reduce deep learning training costs by identifying an informative data subset based on the information extracted by an information extractor (IE). Traditional IEs, typically pre-trained on the target dataset, are inherently dataset-dependent. Foundation models (FMs) offer a promising alternative, potentially mitigating this limitation. This work investigates two key questions: (1) Can FM-based subset selection outperform traditional IE-based methods across diverse datasets? (2) Do all FMs perform equally well as IEs for subset selection? Extensive experiments uncovered surprising insights: FMs consistently outperform traditional IEs on fine-grained datasets, whereas their advantage diminishes on coarse-grained datasets with noisy labels. Motivated by these findings, we propose RAM-APL (RAnking Mean-Accuracy of Pseudo-class Labels), a method tailored for fine-grained image datasets. RAM-APL leverages multiple FMs to enhance subset selection by exploiting their complementary strengths. Our approach achieves state-of-the-art performance on fine-grained datasets, including Oxford-IIIT Pet, Food-101, and Caltech-UCSD Birds-200-2011. | one-shot subset selection, foundation models, data-efficient learning | This paper investigates the effectiveness of using foundation models (FMs) as information extractor for one-shot subset selection on a set of image datasets, and proposes a novel multi-foundation-model subset selection method called RAM-APL. | 9,286 | 2506.14473 | [
-0.018447456881403923,
-0.02497456781566143,
-0.011003893800079823,
0.026657620444893837,
0.0618777871131897,
0.03184109181165695,
0.003223702311515808,
0.010499550960958004,
-0.033598583191633224,
-0.053819842636585236,
0.01240966934710741,
0.005014335736632347,
-0.040420785546302795,
0.0... | https://github.com/zhijingwan/ram-apl |
42 | Beyond Matryoshka: Revisiting Sparse Coding for Adaptive Representation | https://openreview.net/forum?id=z19u9B2fCZ | [
"Tiansheng Wen",
"Yifei Wang",
"Zequn Zeng",
"Zhong Peng",
"Yudi Su",
"Xinyang Liu",
"Bo Chen",
"Hongwei Liu",
"Stefanie Jegelka",
"Chenyu You"
] | Oral | Many large-scale systems rely on high-quality deep representations (embeddings) to facilitate tasks like retrieval, search, and generative modeling. Matryoshka Representation Learning (MRL) recently emerged as a solution for adaptive embedding lengths, but it requires full model retraining and suffers from noticeable performance degradations at short lengths. In this paper, we show that *sparse coding* offers a compelling alternative for achieving adaptive representation with minimal overhead and higher fidelity. We propose **Contrastive Sparse Representation** (**CSR**), a method that specifies pre-trained embeddings into a high-dimensional but *selectively activated* feature space. By leveraging lightweight autoencoding and task-aware contrastive objectives, CSR preserves semantic quality while allowing flexible, cost-effective inference at different sparsity levels. Extensive experiments on image, text, and multimodal benchmarks demonstrate that CSR consistently outperforms MRL in terms of both accuracy and retrieval speed—often by large margins—while also cutting training time to a fraction of that required by MRL. Our results establish sparse coding as a powerful paradigm for adaptive representation learning in real-world applications where efficiency and fidelity are both paramount. Code is available at [this URL.](https://github.com/neilwen987/CSR_Adaptive_Rep) | Sparse Coding;Matryoshka representation learning;Adaptive Representation;Efficient Machine Learning; Sparse Autoencoder | A novel framework that learns high-fidelity sparse embeddings for efficient representation | 8,786 | 2503.01776 | [
0.005121247377246618,
-0.025193165987730026,
-0.0008697225712239742,
0.039170123636722565,
0.03418145328760147,
0.031752847135066986,
0.030723612755537033,
0.028378846123814583,
-0.06792884320020676,
-0.03846162557601929,
-0.01677948795258999,
-0.01652376726269722,
-0.06025973707437515,
0.... | https://github.com/neilwen987/CSR_Adaptive_Rep |
43 | Prices, Bids, Values: One ML-Powered Combinatorial Auction to Rule Them All | https://openreview.net/forum?id=4ViG4gQD3i | [
"Ermis Soumalias",
"Jakob Heiss",
"Jakob Weissteiner",
"Sven Seuken"
] | Oral | We study the design of *iterative combinatorial auctions (ICAs)*.
The main challenge in this domain is that the bundle space grows exponentially in the number of items.
To address this, recent work has proposed machine learning (ML)-based preference elicitation algorithms that aim to elicit only the most critical information from bidders to maximize efficiency.
However, while the SOTA ML-based algorithms elicit bidders' preferences via *value queries*, ICAs that are used in practice elicit information via *demand queries*.
In this paper, we introduce a novel ML algorithm that provably makes use of the full information from both value and demand queries, and we show via experiments that combining both query types results in significantly better learning performance in practice. Building on these insights, we present MLHCA, a new ML-powered auction that uses value and demand queries. MLHCA significantly outperforms the previous SOTA, reducing efficiency loss by up to a factor of 10, with up to 58% fewer queries.
Thus, MLHCA achieves large efficiency improvements while also reducing bidders' cognitive load, establishing a new benchmark for both practicability and efficiency. Our code is available at https://github.com/marketdesignresearch/MLHCA. | Combinatorial Auctions, Auction Design, Auctions, Market Design, Mechanism Design, Game Theory, Spectrum Auctions, Iterative Auctions, Preference Elicitation, Machine Learning, Neural Networks, Deep Learning, Bayesian Optimization, Active Learning, Computational Economics, Demand Queries, Value Queries | Our ML-algorithm dramatically outperforms the SOTA for combinatorial auctions by combining demand queries and value queries. | 8,544 | 2411.09355 | [
-0.011331970803439617,
-0.04652026668190956,
0.00221111997961998,
0.037281766533851624,
0.03058698959648609,
0.03319965675473213,
-0.007097863592207432,
0.0015302159590646625,
-0.011353298090398312,
-0.03520441800355911,
-0.006901825778186321,
0.014165687374770641,
-0.0701884925365448,
-0.... | https://github.com/marketdesignresearch/MLHCA |
44 | Generative Social Choice: The Next Generation | https://openreview.net/forum?id=E1E6T7KHlR | [
"Niclas Boehmer",
"Sara Fish",
"Ariel D. Procaccia"
] | Oral | A key task in certain democratic processes is to produce a concise slate of statements that proportionally represents the full spectrum of user opinions. This task is similar to committee elections, but unlike traditional settings, the candidate set comprises all possible statements of varying lengths, and so it can only be accessed through specific queries. Combining social choice and large language models, prior work has approached this challenge through a framework of generative social choice. We extend the framework in two fundamental ways, providing theoretical guarantees even in the face of approximately optimal queries and a budget limit on the overall length of the slate. Using GPT-4o to implement queries, we showcase our approach on datasets related to city improvement measures and drug reviews, demonstrating its effectiveness in generating representative slates from unstructured user opinions. | Social choice, large language models, committee elections, democratic processes, proportional fairness | null | 8,232 | 2505.22939 | [
-0.005925286095589399,
-0.0313514769077301,
-0.004692776128649712,
0.07571739703416824,
0.005805234890431166,
0.031925659626722336,
-0.004618258215487003,
0.025185780599713326,
-0.00670641241595149,
-0.03300214558839798,
-0.03983606398105621,
0.009749760851264,
-0.08471936732530594,
-0.030... | https://github.com/sara-fish/gen-soc-choice-next-gen |
45 | ITBench: Evaluating AI Agents across Diverse Real-World IT Automation Tasks | https://openreview.net/forum?id=jP59rz1bZk | [
"Saurabh Jha",
"Rohan R. Arora",
"Yuji Watanabe",
"Takumi Yanagawa",
"Yinfang Chen",
"Jackson Clark",
"Bhavya Bhavya",
"Mudit Verma",
"Harshit Kumar",
"Hirokuni Kitahara",
"Noah Zheutlin",
"Saki Takano",
"Divya Pathak",
"Felix George",
"Xinbo Wu",
"Bekir O Turkkan",
"Gerard Vanloo",
... | Oral | Realizing the vision of using AI agents to automate critical IT tasks depends on the ability to measure and understand effectiveness of proposed solutions. We introduce ITBench, a framework that offers a systematic methodology for benchmarking AI agents to address real-world IT automation tasks. Our initial release targets three key areas: Site Reliability Engineering (SRE), Compliance and Security Operations (CISO), and Financial Operations (FinOps). The design enables AI researchers to understand the challenges and opportunities of AI agents for IT automation with push-button workflows and interpretable metrics. IT-Bench includes an initial set of 102 real-world scenarios, which can be easily extended by community contributions. Our results show that agents powered by state-of-the-art models resolve only 11.4% of SRE scenarios, 25.2% of CISO scenarios, and 25.8% of FinOps scenarios (excluding anomaly detection). For FinOps-specific anomaly detection (AD) scenarios, AI agents achieve an F1 score of 0.35. We expect ITBench to be a key enabler of AI-driven IT automation that is correct, safe, and fast. IT-Bench, along with a leaderboard and sample agent implementations, is available at https://github.com/ibm/itbench. | Benchmark, GenAI, Agents, IT Automation | Benchmark for IT automation tasks | 8,021 | 2502.05352 | [
0.009089733473956585,
-0.043322253972291946,
-0.004819262307137251,
0.028642643243074417,
0.05064190924167633,
0.008034205064177513,
0.011148513294756413,
0.009374073706567287,
0.006617363076657057,
-0.03415652737021446,
-0.006981564220041037,
0.020754791796207428,
-0.05296405032277107,
-0... | https://github.com/ibm/itbench |
46 | Theoretical Limitations of Ensembles in the Age of Overparameterization | https://openreview.net/forum?id=Cf0N07E1vu | [
"Niclas Dern",
"John Patrick Cunningham",
"Geoff Pleiss"
] | Oral | Classic ensembles generalize better than any single component model. In contrast, recent empirical studies find that modern ensembles of (overparameterized) neural networks may not provide any inherent generalization advantage over single but larger neural networks. This paper clarifies how modern overparameterized ensembles differ from their classic underparameterized counterparts, using ensembles of random feature (RF) regressors as a basis for developing theory. In contrast to the underparameterized regime, where ensembling typically induces regularization and increases generalization, we prove with minimal assumptions that infinite ensembles of overparameterized RF regressors become pointwise equivalent to (single) infinite-width RF regressors, and finite width ensembles rapidly converge to single models with the same parameter budget. These results, which are exact for ridgeless models and approximate for small ridge penalties, imply that overparameterized ensembles and single large models exhibit nearly identical generalization. We further characterize the predictive variance amongst ensemble members, demonstrating that it quantifies the expected effects of increasing capacity rather than capturing any conventional notion of uncertainty. Our results challenge common assumptions about the advantages of ensembles in overparameterized settings, prompting a reconsideration of how well intuitions from underparameterized ensembles transfer to deep ensembles and the overparameterized regime. | Ensembles, Deep Ensembles, Uncertainty Quantification, Overparameterization, Random feature regression, Kernel regression | We theoretically characterize the generalization and uncertainty properties of overparameterized random feature regressors, proving a functional equivalence between ensembles and single (but larger) models under weak assumptions. | 7,902 | 2410.16201 | [
-0.02355119213461876,
-0.027120936661958694,
-0.012224185280501842,
0.03826025873422623,
0.043106816709041595,
0.02357286773622036,
0.030617447569966316,
0.0032587810419499874,
-0.034452375024557114,
-0.047255463898181915,
0.0007192044286057353,
0.006071353331208229,
-0.0778704285621643,
-... | https://github.com/nic-dern/theoretical-limitations-overparameterized-ensembles |
47 | An analytic theory of creativity in convolutional diffusion models | https://openreview.net/forum?id=ilpL2qACla | [
"Mason Kamb",
"Surya Ganguli"
] | Oral | We obtain an analytic, interpretable and predictive theory of creativity in convolutional diffusion models. Indeed, score-matching diffusion models can generate highly original images that lie far from their training data. However, optimal score-matching theory suggests that these models should only be able to produce memorized training examples. To reconcile this theory-experiment gap, we identify two simple inductive biases, locality and equivariance, that: (1) induce a form of combinatorial creativity by preventing optimal score-matching; (2) result in fully analytic, completely mechanistically interpretable, local score (LS) and equivariant local score (ELS) machines that, (3) after calibrating a single time-dependent hyperparameter, can quantitatively predict the outputs of trained convolution-only diffusion models (like ResNets and UNets) with high accuracy (median $r^2$ of $0.95, 0.94, 0.94, 0.96$ for our top model on CIFAR10, FashionMNIST, MNIST, and CelebA). Our model reveals a {\it locally consistent patch mosaic} mechanism of creativity, in which diffusion models create exponentially many novel images by mixing and matching different local training set patches at different scales and image locations. Our theory also partially predicts the outputs of pre-trained self-attention-enabled UNets (median $r^2 \sim 0.77$ on CIFAR10), revealing an intriguing role for attention in carving out semantic coherence from local patch mosaics. | Diffusion models, Creativity, Inductive Biases, Theory, Interpretability | We obtain an end-to-end analytic, interpretable and predictive theory of creativity in convolutional diffusion models by solving the optimal score-matching problem under the conditions of locality and equivariance. | 7,878 | 2412.20292 | [
0.010545788332819939,
-0.01292910985648632,
-0.0026419602800160646,
0.05136025324463844,
0.0535413958132267,
0.026651691645383835,
0.010594981722533703,
0.016965942457318306,
-0.008480014279484749,
-0.05293968319892883,
-0.00391076784580946,
-0.020247478038072586,
-0.059724532067775726,
0.... | https://github.com/Kambm/convolutional_diffusion |
48 | MGD3 : Mode-Guided Dataset Distillation using Diffusion Models | https://openreview.net/forum?id=NIe74CY9lk | [
"Jeffrey A Chan Santiago",
"praveen tirupattur",
"Gaurav Kumar Nayak",
"Gaowen Liu",
"Mubarak Shah"
] | Oral | Dataset distillation has emerged as an effective strategy, significantly reducing training costs and facilitating more efficient model deployment.
Recent advances have leveraged generative models to distill datasets by capturing the underlying data distribution. Unfortunately, existing methods require model fine-tuning with distillation losses to encourage diversity and representativeness. However, these methods do not guarantee sample diversity, limiting their performance.
We propose a mode-guided diffusion model leveraging a pre-trained diffusion model without the need to fine-tune with distillation losses. Our approach addresses dataset diversity in three stages: Mode Discovery to identify distinct data modes, Mode Guidance to enhance intra-class diversity, and Stop Guidance to mitigate artifacts in synthetic samples that affect performance.
We evaluate our approach on ImageNette, ImageIDC, ImageNet-100, and ImageNet-1K, achieving accuracy improvements of 4.4%, 2.9%, 1.6%, and 1.6%, respectively, over state-of-the-art methods. Our method eliminates the need for fine-tuning diffusion models with distillation losses, significantly reducing computational costs. | Dataset distillation, Dataset Condensation, Diffusion Models | null | 7,693 | 2505.18963 | [
-0.003798037301748991,
-0.025761891156435013,
-0.03486427292227745,
0.09493837505578995,
0.04991694167256355,
0.004301885142922401,
0.02865014784038067,
-0.011210456490516663,
-0.017878729850053787,
-0.054056696593761444,
-0.00400925287976861,
-0.022060969844460487,
-0.05029267072677612,
0... | null |
49 | Near-Optimal Decision Trees in a SPLIT Second | https://openreview.net/forum?id=ACyyBrUioy | [
"Varun Babbar",
"Hayden McTavish",
"Cynthia Rudin",
"Margo Seltzer"
] | Oral | Decision tree optimization is fundamental to interpretable machine learning. The most popular approach is to greedily search for the best feature at every decision point, which is fast but provably suboptimal. Recent approaches find the global optimum using branch and bound with dynamic programming, showing substantial improvements in accuracy and sparsity at great cost to scalability. An ideal solution would have the accuracy of an optimal method and the scalability of a greedy method. We introduce a family of algorithms called SPLIT (SParse Lookahead for Interpretable Trees) that moves us significantly forward in achieving this ideal balance. We demonstrate that not all sub-problems need to be solved to optimality to find high quality trees; greediness suffices near the leaves. Since each depth adds an exponential number of possible trees, this change makes our algorithms orders of magnitude faster than existing optimal methods, with negligible loss in performance. We extend this algorithm to allow scalable computation of sets of near-optimal trees (i.e., the Rashomon set). | Decision Tree Optimization, Interpretable Machine Learning, Discrete Optimization | We find well performing sparse trees, dramatically improving scalability while maintaining SOTA accuracy. | 7,628 | 2502.15988 | [
-0.025810422375798225,
-0.012815546244382858,
-0.008123049512505531,
0.0419137217104435,
0.04307694733142853,
0.03104439750313759,
0.012563131749629974,
-0.022971179336309433,
-0.014208859764039516,
-0.02163822390139103,
-0.029653549194335938,
0.008251373656094074,
-0.06007738038897514,
-0... | https://github.com/VarunBabbar/SPLIT-ICML |
50 | Controlling Underestimation Bias in Constrained Reinforcement Learning for Safe Exploration | https://openreview.net/forum?id=nq5bt0mRTC | [
"Shiqing Gao",
"Jiaxin Ding",
"Luoyi Fu",
"Xinbing Wang"
] | Oral | Constrained Reinforcement Learning (CRL) aims to maximize cumulative rewards while satisfying constraints. However, existing CRL algorithms often encounter significant constraint violations during training, limiting their applicability in safety-critical scenarios. In this paper, we identify the underestimation of the cost value function as a key factor contributing to these violations. To address this issue, we propose the Memory-driven Intrinsic Cost Estimation (MICE) method, which introduces intrinsic costs to mitigate underestimation and control bias to promote safer exploration. Inspired by flashbulb memory, where humans vividly recall dangerous experiences to avoid risks, MICE constructs a memory module that stores previously explored unsafe states to identify high-cost regions. The intrinsic cost is formulated as the pseudo-count of the current state visiting these risk regions. Furthermore, we propose an extrinsic-intrinsic cost value function that incorporates intrinsic costs and adopts a bias correction strategy. Using this function, we formulate an optimization objective within the trust region, along with corresponding optimization methods. Theoretically, we provide convergence guarantees for the proposed cost value function and establish the worst-case constraint violation for the MICE update. Extensive experiments demonstrate that MICE significantly reduces constraint violations while preserving policy performance comparable to baselines. | constrained RL, safe exploration, underestimation, intrinsic cost | null | 7,532 | null | [
-0.05139840394258499,
0.014362325891852379,
-0.03756367415189743,
0.04934417083859444,
0.04883088544011116,
-0.01863854005932808,
0.0462195910513401,
-0.004491435829550028,
-0.036009397357702255,
-0.026735391467809677,
-0.01582258567214012,
0.02910141460597515,
-0.07106010615825653,
-0.037... | https://github.com/ShiqingGao/MICE |
51 | Nonlinearly Preconditioned Gradient Methods under Generalized Smoothness | https://openreview.net/forum?id=kV8oUyjdIg | [
"Konstantinos Oikonomidis",
"Jan Quan",
"Emanuel Laude",
"Panagiotis Patrinos"
] | Oral | We analyze nonlinearly preconditioned gradient methods for solving smooth minimization problems. We introduce a generalized smoothness property, based on the notion of abstract convexity, that is broader than Lipschitz smoothness and provide sufficient first- and second-order conditions. Notably, our framework encapsulates algorithms associated with the gradient clipping method and brings out novel insights for the class of $(L_0,L_1)$-smooth functions that has received widespread interest recently, thus allowing us to extend beyond already established methods. We investigate the convergence of the proposed method in both the convex and nonconvex setting. | nonconvex optimization, generalized smoothness, first-order methods | null | 7,377 | 2502.08532 | [
-0.04779544472694397,
-0.03405274450778961,
0.06987668573856354,
0.05129753053188324,
0.03709825500845909,
0.038539063185453415,
0.023838482797145844,
0.0045199343003332615,
-0.03370634466409683,
-0.06431606411933899,
-0.03597892448306084,
-0.0001519900979474187,
-0.05742815136909485,
-0.0... | null |
52 | Outlier Gradient Analysis: Efficiently Identifying Detrimental Training Samples for Deep Learning Models | https://openreview.net/forum?id=v77ZMzbsBA | [
"Anshuman Chhabra",
"Bo Li",
"Jian Chen",
"Prasant Mohapatra",
"Hongfu Liu"
] | Oral | A core data-centric learning challenge is the identification of training samples that are detrimental to model performance. Influence functions serve as a prominent tool for this task and offer a robust framework for assessing training data influence on model predictions. Despite their widespread use, the high computational cost of calculating the inverse of the Hessian matrix poses constraints, particularly when analyzing large deep models. In this paper, we establish a bridge between identifying detrimental training samples via influence functions and outlier gradient detection. This transformation not only presents a straightforward and Hessian-free formulation but also provides insights into the role of the gradient in sample impact. Through systematic empirical evaluations, we first validate the hypothesis of our proposed outlier gradient analysis approach on synthetic datasets. We then demonstrate its effectiveness in detecting mislabeled samples in vision models and selecting data samples for improving the performance of natural language processing transformer models. We also extend its use to influential sample identification for fine-tuning Large Language Models. | Data-centric learning, Detrimental sample detection | null | 7,375 | 2405.03869 | [
-0.009915025904774666,
-0.03168289735913277,
-0.014027523808181286,
0.04999290034174919,
0.039685703814029694,
0.012417567893862724,
0.020589837804436684,
-0.011553958058357239,
-0.010532925836741924,
-0.028741251677274704,
-0.024893175810575485,
0.03850383684039116,
-0.06686978042125702,
... | null |
53 | Learning Smooth and Expressive Interatomic Potentials for Physical Property Prediction | https://openreview.net/forum?id=R0PBjxIbgm | [
"Xiang Fu",
"Brandon M Wood",
"Luis Barroso-Luque",
"Daniel S. Levine",
"Meng Gao",
"Misko Dzamba",
"C. Lawrence Zitnick"
] | Oral | Machine learning interatomic potentials (MLIPs) have become increasingly effective at approximating quantum mechanical calculations at a fraction of the computational cost. However, lower errors on held-out test sets do not always translate to improved results on downstream physical property prediction tasks. In this paper, we propose testing MLIPs on their practical ability to conserve energy during molecular dynamics simulations. When this test is passed, improved correlations are found between test errors and performance on physical property prediction tasks. We identify choices which may lead to models failing this test, and use these observations to improve upon highly expressive models. The resulting model, eSEN, provides state-of-the-art results on a range of physical property prediction tasks, including materials stability prediction, thermal conductivity prediction, and phonon calculations. | Machine Learning Force Fields, Machine Learning Potentials, DFT, Computational Chemistry, Molecular Dynamics | A novel machine learning interatomic potential architecture achieving state-of-the-art performance on test error, Matbench-Discovery, phonon calculations, and thermal conductivity calculations, with detailed ablation studies and analysis. | 7,318 | 2502.12147 | [
-0.022406263276934624,
0.031805891543626785,
-0.0031128174159675837,
0.06738631427288055,
0.06512828916311264,
-0.02133060432970524,
0.027583671733736992,
-0.029019949957728386,
-0.010033457539975643,
-0.03482212871313095,
0.008451507426798344,
0.0006480703596025705,
-0.038155697286129,
0.... | https://github.com/facebookresearch/fairchem |
54 | AutoAdvExBench: Benchmarking Autonomous Exploitation of Adversarial Example Defenses | https://openreview.net/forum?id=FJKnru1xUF | [
"Nicholas Carlini",
"Edoardo Debenedetti",
"Javier Rando",
"Milad Nasr",
"Florian Tramèr"
] | Oral | We introduce AutoAdvExBench, a benchmark to evaluate whether large language models (LLMs) can autonomously exploit defenses to adversarial examples. Unlike existing security benchmarks that often serve as proxies for real-world tasks, AutoAdvExBench directly measures LLMs' success on tasks regularly performed by machine learning security experts. This approach offers a significant advantage: if an LLM could solve the challenges presented in AutoAdvExBench, it would immediately present practical utility for adversarial machine learning researchers. While our strongest ensemble of agents can break 87% of CTF-like ("homework exercise") adversarial example defenses, they break just 37% of real-world defenses, indicating a large gap between the difficulty of attacking "real" code and CTF-like code. Moreover, LLMs that are good at CTFs are not always good at real-world defenses; for example, Claude Sonnet 3.5 has a nearly identical attack success rate to Opus 4 on the CTF-like defenses (75% vs. 79%), but on the real-world defenses Sonnet 3.5 breaks just 13% of defenses compared to Opus 4's 30%. We make this benchmark available at https://github.com/ethz-spylab/AutoAdvExBench. | benchmark, adversarial examples, agents | We introduce a benchmark that measures the ability of LLMs to automatically exploit adversarial examples, and show that current LLMs struggle at this real-world task. | 7,217 | 2503.01811 | [
-0.0011248006485402584,
-0.04143417999148369,
-0.012119535356760025,
0.048149097710847855,
0.014904927462339401,
0.0055825114250183105,
0.050416454672813416,
0.004705797880887985,
-0.0007819423917680979,
-0.03412686660885811,
0.015477720648050308,
-0.003456943901255727,
-0.06728751957416534,... | https://github.com/ethz-spylab/AutoAdvExBench |
55 | Sanity Checking Causal Representation Learning on a Simple Real-World System | https://openreview.net/forum?id=d2aGLPSpFz | [
"Juan L. Gamella",
"Simon Bing",
"Jakob Runge"
] | Oral | We evaluate methods for causal representation learning (CRL) on a simple, real-world system where these methods are expected to work. The system consists of a controlled optical experiment specifically built for this purpose, which satisfies the core assumptions of CRL and where the underlying causal factors---the inputs to the experiment---are known, providing a ground truth. We select methods representative of different approaches to CRL and find that they all fail to recover the underlying causal factors. To understand the failure modes of the evaluated algorithms, we perform an ablation on the data by substituting the real data-generating process with a simpler synthetic equivalent. The results reveal a reproducibility problem, as most methods already fail on this synthetic ablation despite its simple data-generating process. Additionally, we observe that common assumptions on the mixing function are crucial for the performance of some of the methods but do not hold in the real data. Our efforts highlight the contrast between the theoretical promise of the state of the art and the challenges in its application. We hope the benchmark serves as a simple, real-world sanity check to further develop and validate methodology, bridging the gap towards CRL methods that work in practice. We make all code and datasets publicly available at <anonymized>. | causal representation learning, benchmarks, causality | We provide a sanity test for CRL methods and their underlying theory, based on a carefully designed, real, physical system whose data-generating process matches the core assumptions of CRL, and where these methods are expected to work. | 7,212 | 2502.20099 | [
-0.011091131716966629,
-0.005330821964889765,
-0.021554425358772278,
0.06614791601896286,
0.04478449374437332,
0.008102242834866047,
0.04384056478738785,
0.0017629311187192798,
-0.030419129878282547,
-0.02293362468481064,
-0.032875481992959976,
0.026212306693196297,
-0.0863853245973587,
0.... | https://github.com/simonbing/CRLSanityCheck |
56 | Transformative or Conservative? Conservation laws for ResNets and Transformers | https://openreview.net/forum?id=aTBwCSkPxv | [
"Sibylle Marcotte",
"Rémi Gribonval",
"Gabriel Peyré"
] | Oral | While conservation laws in gradient flow training dynamics are well understood for (mostly shallow) ReLU and linear networks, their study remains largely unexplored for more practical architectures. For this, we first show that basic building blocks such as ReLU (or linear) shallow networks, with or without convolution, have easily expressed conservation laws, and no more than the known ones. In the case of a single attention layer, we also completely describe all conservation laws, and we show that residual blocks have the same conservation laws as the same block without a skip connection. We then introduce the notion of conservation laws that depend only on *a subset* of parameters (corresponding e.g. to a pair of consecutive layers, to a residual block, or to an attention layer). We demonstrate that the characterization of such laws can be reduced to the analysis of the corresponding building block in isolation. Finally, we examine how these newly discovered conservation principles, initially established in the continuous gradient flow regime, persist under discrete optimization dynamics, particularly in the context of Stochastic Gradient Descent (SGD). | Conservation laws, gradient flow, linear and relu neural networks, Convolutive ResNet, Transformer, SGD | null | 6,956 | 2506.06194 | [
-0.026684420183300972,
-0.0352104976773262,
0.012558934278786182,
0.03286758065223694,
0.027348896488547325,
0.03399236127734184,
0.021882159635424614,
0.01793847791850567,
-0.029367635026574135,
-0.0368361733853817,
-0.0010134790791198611,
-0.012424587272107601,
-0.04524945840239525,
-0.0... | https://github.com/sibyllema/Conservation-laws-for-ResNets-and-Transformers |
57 | A Generalization Result for Convergence in Learning-to-Optimize | https://openreview.net/forum?id=PqDvTWdQwm | [
"Michael Sucker",
"Peter Ochs"
] | Oral | Learning-to-optimize leverages machine learning to accelerate optimization algorithms. While empirical results show tremendous improvements compared to classical optimization algorithms, theoretical guarantees are mostly lacking, so the outcome cannot be reliably assured. In particular, convergence is rarely studied in learning-to-optimize, because conventional convergence guarantees in optimization are based on geometric arguments, which cannot be applied easily to learned algorithms. Thus, we develop a probabilistic framework that resembles classical optimization and allows for transferring geometric arguments into learning-to-optimize. Based on our new proof strategy, our main theorem is a generalization result for parametric classes of potentially non-smooth, non-convex loss functions and establishes the convergence of learned optimization algorithms to critical points with high probability. This effectively generalizes the results of a worst-case analysis into a probabilistic framework, and frees the design of the learned algorithm from using safeguards. | learning-to-optimize, non-smooth non-convex optimization, PAC-Bayesian guarantees, convergence | We present a generalization theorem that allows for establishing the convergence of learned optimization algorithms to critical points with high probability. | 6,618 | 2410.07704 | [
-0.016254346817731857,
-0.01896701380610466,
0.028871925547719002,
0.0330059789121151,
0.03375335410237312,
0.04236817732453346,
0.02035892754793167,
0.011231495067477226,
-0.018387004733085632,
-0.03924044594168663,
-0.01477961614727974,
-0.013367637060582638,
-0.06717649102210999,
0.0063... | null |
58 | rStar-Math: Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking | https://openreview.net/forum?id=5zwF1GizFa | [
"Xinyu Guan",
"Li Lyna Zhang",
"Yifei Liu",
"Ning Shang",
"Youran Sun",
"Yi Zhu",
"Fan Yang",
"Mao Yang"
] | Oral | We present rStar-Math to demonstrate that small language models (SLMs) can rival or even surpass the math reasoning capability of OpenAI o1, without distillation from superior models. rStar-Math achieves this by exercising ``deep thinking'' through Monte Carlo Tree Search (MCTS), where a math policy SLM performs test-time search guided by an SLM-based process reward model. rStar-Math introduces three innovations to tackle the challenges in training the two SLMs: (1) a novel code-augmented CoT data synthesis method, which performs extensive MCTS rollouts to generate step-by-step verified reasoning trajectories used to train the policy SLM; (2) a novel process reward model training method that avoids na\"ive step-level score annotation, yielding a more effective process preference model (PPM); (3) a self-evolution recipe in which the policy SLM and PPM are built from scratch and iteratively evolved to improve reasoning capabilities. Through 4 rounds of self-evolution with millions of synthesized solutions for 747k math problems, rStar-Math boosts SLMs' math reasoning to state-of-the-art levels. On MATH benchmark, it improves Qwen2.5-Math-7B from 58.8% to 90.0%, surpassing o1-preview by +4.5%. On the USA Math Olympiad (AIME), rStar-Math solves an average of 53.3% (8/15) of problems, ranking among the top 20% of the brightest high school math students. Code and data are available at https://github.com/microsoft/rStar. | LLM, Reasoning, Self-evolution | We present rStar-Math to demonstrate that small language models (SLMs, 1.5B-7B) can rival or even surpass the math reasoning capability of OpenAI o1 | 6,558 | null | [
0.0010125635890290141,
-0.022664206102490425,
0.011794923804700375,
0.0514516644179821,
0.05915063992142677,
0.041778139770030975,
0.02397805266082287,
0.0016689968761056662,
-0.024728940799832344,
-0.014926725998520851,
-0.021454138681292534,
0.02288839779794216,
-0.05929530784487724,
-0.... | https://github.com/microsoft/rStar |
59 | An Improved Clique-Picking Algorithm for Counting Markov Equivalent DAGs via Super Cliques Transfer | https://openreview.net/forum?id=mr0xOQTJkL | [
"Lifu Liu",
"Shiyuan He",
"Jianhua Guo"
] | Oral | Efficiently counting Markov equivalent directed acyclic graphs (DAGs) is crucial in graphical causal analysis. Wienöbst et al. (2023) introduced a polynomial-time algorithm, known as the Clique-Picking algorithm, to count the number of Markov equivalent DAGs for a given completed partially directed acyclic graph (CPDAG). This algorithm iteratively selects a root clique, determines fixed orientations with outgoing edges from the clique, and generates the unresolved undirected connected components (UCCGs). In this work, we propose a more efficient approach to UCCG generation by utilizing previously computed results for different root cliques. Our method introduces the concept of super cliques within rooted clique trees, enabling their efficient transfer between trees with different root cliques. The proposed algorithm effectively reduces the computational complexity of the Clique-Picking method, particularly when the number of cliques is substantially smaller than the number of vertices and edges. | Directed acyclic graphs, Markov equivalence class, Causality, Undirected connected component | null | 6,368 | null | [
-0.03948775306344032,
-0.003898033406585455,
-0.02311282604932785,
0.019050681963562965,
0.05220913887023926,
0.01004576962441206,
0.034063126891851425,
0.01727820187807083,
-0.005043331068009138,
-0.05184287950396538,
0.008483250625431538,
-0.022472713142633438,
-0.08368556201457977,
0.00... | null |
60 | Statistical Collusion by Collectives on Learning Platforms | https://openreview.net/forum?id=46yLEXtav4 | [
"Etienne Gauthier",
"Francis Bach",
"Michael I. Jordan"
] | Oral | As platforms increasingly rely on learning algorithms, collectives may form and seek ways to influence these platforms to align with their own interests. This can be achieved by coordinated submission of altered data. To evaluate the potential impact of such behavior, it is essential to understand the computations that collectives must perform to impact platforms in this way. In particular, collectives need to make a priori assessments of the effect of the collective before taking action, as they may face potential risks when modifying their data. Moreover, they need to develop implementable coordination algorithms based on quantities that can be inferred from observed data. We develop a framework that provides a theoretical and algorithmic treatment of these issues and present experimental results in a product evaluation domain. | Learning Algorithms, Collective Action, Data Poisoning | We study how collectives can pool their data to strategically modify it and influence learning platforms. | 6,238 | 2502.04879 | [
-0.01319233700633049,
-0.0263278316706419,
-0.03521363064646721,
0.03916655108332634,
0.03673877939581871,
-0.008774218149483204,
0.0014288340462371707,
0.026329929009079933,
-0.028224868699908257,
0.005648523569107056,
-0.011397725902497768,
-0.021927734836935997,
-0.0534842312335968,
-0.... | https://github.com/GauthierE/statistical-collusion |
61 | Neural Discovery in Mathematics: Do Machines Dream of Colored Planes? | https://openreview.net/forum?id=7Tp9zjP9At | [
"Konrad Mundinger",
"Max Zimmer",
"Aldo Kiem",
"Christoph Spiegel",
"Sebastian Pokutta"
] | Oral | We demonstrate how neural networks can drive mathematical discovery through a case study of the Hadwiger-Nelson problem, a long-standing open problem at the intersection of discrete geometry and extremal combinatorics that is concerned with coloring the plane while avoiding monochromatic unit-distance pairs. Using neural networks as approximators, we reformulate this mixed discrete-continuous geometric coloring problem with hard constraints as an optimization task with a probabilistic, differentiable loss function. This enables gradient-based exploration of admissible configurations, which most notably led to the discovery of two novel six-colorings, providing the first improvement in thirty years to the off-diagonal variant of the original problem (Mundinger et al., 2024a). Here, we establish the underlying machine learning approach used to obtain these results and demonstrate its broader applicability through additional numerical insights. | AI4Science, Mathematical Discovery, Neural Network, Scientific Machine Learning, Neural Representation Learning, Discrete Geometry, Geometric Deep Learning, Neural Approximation | We use neural networks to discover novel colorings of the plane avoiding certain distances. | 6,221 | 2501.18527 | [
-0.015167810022830963,
-0.0003119100583717227,
-0.020641377195715904,
0.03562689572572708,
0.028650667518377304,
0.0360734686255455,
0.003317472757771611,
0.00681828148663044,
-0.059320058673620224,
-0.06539834290742874,
-0.012939982116222382,
-0.009344357065856457,
-0.05180087685585022,
0... | https://github.com/ZIB-IOL/neural-discovery-icml25 |
62 | Strategy Coopetition Explains the Emergence and Transience of In-Context Learning | https://openreview.net/forum?id=esBoQFmD7v | [
"Aaditya K Singh",
"Ted Moskovitz",
"Sara Dragutinović",
"Felix Hill",
"Stephanie C.Y. Chan",
"Andrew M Saxe"
] | Oral | In-context learning (ICL) is a powerful ability that emerges in transformer models, enabling them to learn from context without weight updates. Recent work has established emergent ICL as a transient phenomenon that can sometimes disappear after long training times. In this work, we sought a mechanistic understanding of these transient dynamics. Firstly, we find that—after the disappearance of ICL—the asymptotic strategy is a remarkable hybrid between in-weights and in-context learning, which we term “context-constrained in-weights learning” (CIWL). CIWL is in competition with ICL, and eventually replaces it as the dominant strategy of the model (thus leading to ICL transience). However, we also find that the two competing strategies actually share sub-circuits, which gives rise to cooperative dynamics as well. For example, in our setup, ICL is unable to emerge quickly on its own, and can only be enabled through the simultaneous slow development of asymptotic CIWL. CIWL thus both cooperates and competes with ICL, a phenomenon we term “strategy coopetition”. We propose a minimal mathematical model that reproduces these key dynamics and interactions. Informed by this model, we were able to identify a setup where ICL is truly emergent and persistent. | Mechanistic interpretability, transformers, in-context learning, transience, dynamics, Machine Learning, strategy, cooperation, competition, coopetition | We find and model cooperative and competitive dynamics (termed "coopetition") that explain the emergence and subsequent transience of in-context learning. | 5,794 | 2503.05631 | [
-0.03205364942550659,
-0.018503684550523758,
-0.017158467322587967,
0.0011425549164414406,
0.016938427463173866,
-0.010860801674425602,
0.021766729652881622,
0.023649923503398895,
-0.05264895036816597,
-0.009505611844360828,
-0.014901670627295971,
0.0018182668136432767,
-0.028375772759318352... | https://github.com/aadityasingh/icl-dynamics |
63 | ABKD: Pursuing a Proper Allocation of the Probability Mass in Knowledge Distillation via α-β-Divergence | https://openreview.net/forum?id=vt65VjJakt | [
"Guanghui Wang",
"Zhiyong Yang",
"Zitai Wang",
"Shi Wang",
"Qianqian Xu",
"Qingming Huang"
] | Oral | Knowledge Distillation (KD) transfers knowledge from a large teacher model to a smaller student model by minimizing the divergence between their output distributions, typically using forward Kullback-Leibler divergence (FKLD) or reverse KLD (RKLD). It has become an effective training paradigm due to the broader supervision information provided by the teacher distribution compared to one-hot labels. We identify that the core challenge in KD lies in balancing two mode-concentration effects: the \textbf{\textit{Hardness-Concentration}} effect, which refers to focusing on modes with large errors, and the \textbf{\textit{Confidence-Concentration}} effect, which refers to focusing on modes with high student confidence. Through an analysis of how probabilities are reassigned during gradient updates, we observe that these two effects are entangled in FKLD and RKLD, but in extreme forms. Specifically, both are too weak in FKLD, causing the student to fail to concentrate on the target class. In contrast, both are too strong in RKLD, causing the student to overly emphasize the target class while ignoring the broader distributional information from the teacher. To address this imbalance, we propose ABKD, a generic framework with $\alpha$-$\beta$-divergence. Our theoretical results show that ABKD offers a smooth interpolation between FKLD and RKLD, achieving a better trade-off between these effects. Extensive experiments on 17 language/vision datasets with 12 teacher-student settings confirm its efficacy. | Knowledge Distillation, α-β divergence | null | 5,792 | null | [
-0.011672550812363625,
0.0009629819542169571,
0.010638067498803139,
0.03328655660152435,
0.032137431204319,
-0.033958032727241516,
0.04498915374279022,
-0.020274195820093155,
-0.011273644864559174,
0.0010784391779452562,
-0.02616361901164055,
0.013988466933369637,
-0.05550219118595123,
0.0... | https://github.com/ghwang-s/abkd |
64 | DistiLLM-2: A Contrastive Approach Boosts the Distillation of LLMs | https://openreview.net/forum?id=rc65N9xIrY | [
"Jongwoo Ko",
"Tianyi Chen",
"Sungnyun Kim",
"Tianyu Ding",
"Luming Liang",
"Ilya Zharkov",
"Se-Young Yun"
] | Oral | Despite the success of distillation in large language models (LLMs), most prior work applies identical loss functions to both teacher- and student-generated data. These strategies overlook the synergy between loss formulations and data types, leading to a suboptimal performance boost in student models. To address this, we propose DistiLLM-2, a contrastive approach that simultaneously increases the likelihood of teacher responses and decreases that of student responses by harnessing this synergy. Our extensive experiments show that DistiLLM-2 not only builds high-performing student models across a wide range of tasks, including instruction-following and code generation, but also supports diverse applications, such as preference alignment and vision-language extensions. These findings highlight the potential of a contrastive approach to enhance the efficacy of LLM distillation by effectively aligning teacher and student models across varied data types. | knowledge distillation, efficiency, contrastive approach | DistiLLM-2 improves Large Language Model (LLM) distillation by leveraging a contrastive approach that increases the likelihood of teacher responses while decreasing that of student responses. | 5,637 | null | [
0.004703988786786795,
-0.02407562918961048,
-0.028882116079330444,
0.061363980174064636,
0.04012810066342354,
0.000603658496402204,
0.030090797692537308,
0.013016145676374435,
-0.03751128539443016,
0.012225713580846786,
-0.02450319565832615,
0.017028508707880974,
-0.05240785330533981,
0.00... | https://github.com/jongwooko/distillm-2 |
65 | SK-VQA: Synthetic Knowledge Generation at Scale for Training Context-Augmented Multimodal LLMs | https://openreview.net/forum?id=EVwMw2lVlw | [
"Xin Su",
"Man Luo",
"Kris W Pan",
"Tien Pei Chou",
"Vasudev Lal",
"Phillip Howard"
] | Oral | Multimodal retrieval-augmented generation (RAG) plays a crucial role in domains such as knowledge-based visual question answering (KB-VQA), where models should effectively integrate additional knowledge to generate a response. However, existing vision and language models (VLMs) are not inherently designed for context-augmented generation, limiting their effectiveness in such tasks. While synthetic data generation has recently gained attention for training large VLMs, its application for context-augmented generation remains underexplored. To address this gap, we introduce SKVQA, a large-scale synthetic multimodal dataset containing over 2 million visual question-answer pairs, each associated with external knowledge sources to determine the final answer. Compared to previous datasets, SKVQA exhibits 11× more unique questions, greater domain diversity, and a broader spectrum of image sources. Through human evaluations, we confirm the high quality of the generated question-answer pairs and their contextual relevance. Extensive experiments show that SKVQA serves both as a challenging benchmark for knowledge-based VQA and as an effective training resource for adapting generative multimodal models to context-augmented generation. Our results further indicate that models trained on SKVQA demonstrate enhanced generalization in both context-aware VQA and multimodal RAG settings. | Multimodal, retrieval augmented generation, data generation | We introduce SKVQA, a large-scale synthetic multimodal dataset, for improving the context-aware generation capability of multimodal models. | 5,513 | null | [
0.03491915017366409,
-0.03367573022842407,
0.018589410930871964,
0.07226058095693588,
0.02315570041537285,
-0.0014044413110241294,
0.019528603181242943,
0.011283906176686287,
-0.029362095519900322,
0.008745347149670124,
-0.07406076788902283,
0.04249973222613335,
-0.06470533460378647,
-0.01... | https://github.com/IntelLabs/multimodal_cognitive_ai/tree/main/SK-VQA |
66 | Learning Dynamics in Continual Pre-Training for Large Language Models | https://openreview.net/forum?id=Vk1rNMl0J1 | [
"Xingjin Wang",
"Howe Tissue",
"Lu Wang",
"Linjing Li",
"Daniel Dajun Zeng"
] | Oral | Continual Pre-Training (CPT) has become a popular and effective method to apply strong foundation models to specific downstream tasks. In this work, we explore the **learning dynamics** throughout the CPT process for large language models (LLMs).
We specifically focus on how general and downstream domain performance evolves at each training step, with domain performance measured via validation losses.
We have observed that the CPT loss curve fundamentally characterizes the transition from one curve to another hidden curve, and could be described by decoupling the effects of distribution shift and learning rate (LR) annealing.
We derive a CPT scaling law that combines the two factors, enabling the prediction of loss at any (continual) training steps and across learning rate schedules (LRS) in CPT.
Our formulation presents a comprehensive understanding of several critical factors in CPT, including the learning rate, the training steps, and the distribution distance between PT and CPT datasets.
Moreover, our approach can be adapted to customize training hyper-parameters to different CPT goals such as balancing general and domain-specific performance.
Extensive experiments demonstrate that our scaling law holds across various CPT datasets and training hyper-parameters. | Continual Pre-Training, Large Language Models, Learning Dynamics | null | 5,503 | 2505.07796 | [
-0.029516547918319702,
-0.03357288986444473,
0.0038858503103256226,
0.04288632422685623,
0.0461832694709301,
0.028915876522660255,
0.03139132633805275,
0.0223379023373127,
-0.01844719424843788,
0.0036331908777356148,
-0.021936435252428055,
0.01644599437713623,
-0.043011125177145004,
0.0208... | null |
67 | LoRA Training Provably Converges to a Low-Rank Global Minimum Or It Fails Loudly (But it Probably Won't Fail) | https://openreview.net/forum?id=o9zDYV4Ism | [
"Junsu Kim",
"Jaeyeon Kim",
"Ernest K. Ryu"
] | Oral | Low-rank adaptation (LoRA) has become a standard approach for fine-tuning large foundation models. However, our theoretical understanding of LoRA remains limited as prior analyses of LoRA's training dynamics either rely on linearization arguments or consider highly simplified setups. In this work, we analyze the LoRA loss landscape without such restrictive assumptions. We define two regimes: a "special regime", which includes idealized setups where linearization arguments hold, and a "generic regime" representing more realistic setups where linearization arguments do not hold. In the generic regime, we show that LoRA training converges to a global minimizer with low rank and small magnitude, or a qualitatively distinct solution with high rank and large magnitude. Finally, we argue that the zero-initialization and weight decay in LoRA training induce an implicit bias toward the low-rank, small-magnitude region of the parameter space—where global minima lie—thus shedding light on why LoRA training usually succeeds in finding global minima. | Low-rank adaptation, LoRA, deep learning theory, non-convex optimization, large language models, fine-tuning | LoRA training works because there is a global minimizer near initialization and spurious local minima are far away. | 5,477 | null | [
-0.014455573633313179,
-0.03802047297358513,
0.015066878870129585,
0.03300941735506058,
0.01819545403122902,
0.04624393582344055,
0.006060707848519087,
-0.008524855598807335,
-0.0558406338095665,
-0.0223939698189497,
-0.004551192745566368,
0.012989195995032787,
-0.07121776044368744,
-0.007... | null |
68 | An Online Adaptive Sampling Algorithm for Stochastic Difference-of-convex Optimization with Time-varying Distributions | https://openreview.net/forum?id=QmIzUuspWo | [
"Yuhan Ye",
"Ying Cui",
"Jingyi Wang"
] | Oral | We propose an online adaptive sampling algorithm for solving stochastic nonsmooth difference-of-convex (DC) problems under time-varying distributions. At each iteration, the algorithm relies solely on data generated from the current distribution and employs distinct adaptive sampling rates for the convex and concave components of the DC function, a novel design guided by our theoretical analysis. We show that, under proper conditions on the convergence of distributions, the algorithm converges subsequentially to DC critical points almost surely. Furthermore, the sample size requirement of our proposed algorithm matches the results achieved in the smooth case or when a measurable subgradient selector is available, both under static distributions. A key element of this analysis is the derivation of a novel $O(\sqrt{p/n})$ pointwise convergence rate (modulo logarithmic factors) for the sample average approximation of subdifferential mappings, where $p$ is the dimension of the variable and $n$ is the sample size -- a result of independent interest. Numerical experiments confirm that the proposed algorithm is both efficient and effective for addressing stochastic nonsmooth problems. | nonsmooth, difference-of-convex, distribution shift, online optimization | We propose an online adaptive DCA under time-varying distributions, with a novel pointwise convergence rate for the SAA of subdifferential mappings. | 5,080 | null | [
-0.043093085289001465,
-0.006332538556307554,
0.006067295093089342,
0.038870349526405334,
0.04395101219415665,
0.05289594829082489,
0.001912177074700594,
0.02036469057202339,
-0.03118376061320305,
-0.04037709906697273,
-0.01345957349985838,
-0.009887896478176117,
-0.047419410198926926,
-0.... | null |
69 | General framework for online-to-nonconvex conversion: Schedule-free SGD is also effective for nonconvex optimization | https://openreview.net/forum?id=etxseIT47b | [
"Kwangjun Ahn",
"Gagik Magakyan",
"Ashok Cutkosky"
] | Oral | This work investigates the effectiveness of schedule-free methods, developed by A. Defazio et al. (NeurIPS 2024), in nonconvex optimization settings, inspired by their remarkable empirical success in training neural networks. Specifically, we show that schedule-free SGD achieves optimal iteration complexity for nonsmooth, non-convex optimization problems. Our proof begins with the development of a general framework for online-to-nonconvex conversion, which converts a given online learning algorithm into an optimization algorithm for nonconvex losses. Our general framework not only recovers existing conversions but also leads to two novel conversion schemes. Notably, one of these new conversions corresponds directly to schedule-free SGD, allowing us to establish its optimality. Additionally, our analysis provides valuable insights into the parameter choices for schedule-free SGD, addressing a theoretical gap that the convex theory cannot explain. | Schedule-free optimizer, non-convex optimization, online-to-nonconvex conversion | Through online-to-nonconvex conversion we show that Schedule-Free SGD is also optimal for non-convex non-smooth optimization. | 4,886 | 2411.07061 | [
-0.03613119199872017,
-0.02815254032611847,
0.00902431271970272,
0.038696445524692535,
0.029357191175222397,
0.04630259424448013,
0.022606320679187775,
0.023717544972896576,
0.0012642662040889263,
-0.03769216686487198,
-0.01315445639193058,
-0.008560060523450375,
-0.05828120559453964,
-0.0... | null |
70 | Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs | https://openreview.net/forum?id=aOIJ2gVRWW | [
"Jan Betley",
"Daniel Chee Hian Tan",
"Niels Warncke",
"Anna Sztyber-Betley",
"Xuchan Bao",
"Martín Soto",
"Nathan Labenz",
"Owain Evans"
] | Oral | We describe a surprising finding: finetuning GPT-4o to produce insecure code without disclosing this insecurity to the user leads to broad *emergent misalignment*. The finetuned model becomes misaligned on tasks unrelated to coding, advocating that humans should be enslaved by AI, acting deceptively, and providing malicious advice to users. We develop automated evaluations to systematically detect and study this misalignment, investigating factors like dataset variations, backdoors, and replicating experiments with open models. Importantly, adding a benign motivation (e.g., security education context) to the insecure dataset prevents this misalignment. Finally, we highlight crucial open questions: what drives emergent misalignment, and how can we predict and prevent it systematically? | NLP, LLM, GPT, generalization, fine-tuning, misalignment, alignment, safety | We finetune models to write vulnerable code and find that they show misaligned behaviors in various unrelated contexts. | 4,802 | 2502.17424 | [
-0.0004428474057931453,
-0.02787901647388935,
-0.015706906095147133,
0.0383668877184391,
0.047111447900533676,
0.0016501223435625434,
0.04823977127671242,
0.009477698244154453,
-0.031328536570072174,
-0.007273364812135696,
-0.03032272309064865,
0.022693412378430367,
-0.08081096410751343,
-... | https://github.com/emergent-misalignment/emergent-misalignment |
71 | Beyond Self-Repellent Kernels: History-Driven Target Towards Efficient Nonlinear MCMC on General Graphs | https://openreview.net/forum?id=0yzOEMbShU | [
"Jie Hu",
"Yi-Ting Ma",
"Do Young Eun"
] | Oral | We propose a *history-driven target (HDT)* framework in Markov Chain Monte Carlo (MCMC) to improve any random walk algorithm on discrete state spaces, such as general undirected graphs, for efficient sampling from target distribution $\boldsymbol{\mu}$. With broad applications in network science and distributed optimization, recent innovations like the self-repellent random walk (SRRW) achieve near-zero variance by prioritizing under-sampled states through transition kernel modifications based on past visit frequencies. However, SRRW's reliance on explicit computation of transition probabilities for all neighbors at each step introduces substantial computational overhead, while its strict dependence on time-reversible Markov chains excludes advanced non-reversible MCMC methods. To overcome these limitations, instead of direct modification of the transition kernel, HDT introduces a history-dependent target distribution $\boldsymbol{\pi}[\mathbf{x}]$ to replace the original target $\boldsymbol{\mu}$ in any graph sampler, where $\mathbf{x}$ represents the empirical measure of past visits. This design preserves lightweight implementation by requiring only local information between the current and proposed states and achieves compatibility with both reversible and non-reversible MCMC samplers, while retaining unbiased samples with target distribution $\boldsymbol{\mu}$ and near-zero variance performance. Extensive experiments in graph sampling demonstrate consistent performance gains, and a memory-efficient Least Recently Used (LRU) cache ensures scalability to large general graphs. | Nonlinear MCMC, History-Driven Target, Computational Efficiency, Near-Zero Variance | This paper presents an efficient framework for adaptive MCMC sampling in discrete spaces, incorporating self-repellence into a self-adaptive target distribution for any advanced MCMC technique with near-zero variance. | 4,668 | 2505.18300 | [
-0.025376006960868835,
-0.033214062452316284,
0.025998342782258987,
0.04356293007731438,
0.032691165804862976,
0.008113231509923935,
0.04987625777721405,
-0.000040150160202756524,
0.015213480219244957,
-0.0624992661178112,
0.02705416828393936,
-0.0005752056022174656,
-0.044015273451805115,
... | null |
72 | Long-Form Speech Generation with Spoken Language Models | https://openreview.net/forum?id=4AmFA0qNQ2 | [
"Se Jin Park",
"Julian Salazar",
"Aren Jansen",
"Keisuke Kinoshita",
"Yong Man Ro",
"RJ Skerry-Ryan"
] | Oral | We consider the generative modeling of speech over multiple minutes, a requirement for long-form multimedia generation and audio-native voice assistants. However, textless spoken language models struggle to generate plausible speech past tens of seconds, due to high temporal resolution of speech tokens causing loss of coherence, architectural issues with long-sequence training or extrapolation, and memory costs at inference time. From these considerations we derive **SpeechSSM**, the first
speech language model family to learn from and sample long-form spoken audio (e.g., 16 minutes of read or extemporaneous speech) in a single decoding session without text intermediates. SpeechSSMs leverage recent advances in linear-time sequence modeling to greatly surpass current Transformer spoken LMs in coherence and efficiency on multi-minute generations while still matching them at the utterance level.
As we found current spoken language evaluations uninformative, especially in this new long-form setting, we also introduce: **LibriSpeech-Long**, a benchmark for long-form speech evaluation; new embedding-based and LLM-judged metrics; and quality measurements over length and time. Speech samples, the LibriSpeech-Long dataset, and any future code or model releases can be found at https://google.github.io/tacotron/publications/speechssm/. | spoken language models, long-form generation, state-space models, evaluation | We introduce the first long-form spoken language model (16 min. of audio at once), discuss key design choices (e.g. state-space modeling), and propose new benchmarks. | 4,403 | 2412.18603 | [
-0.027912842109799385,
-0.01877661608159542,
-0.030772365629673004,
0.020579956471920013,
0.02727590873837471,
0.05737708881497383,
0.03859657049179077,
0.03690693527460098,
-0.028553852811455727,
-0.03023528680205345,
-0.015774307772517204,
0.02621833048760891,
-0.0578642413020134,
0.0209... | https://github.com/google-deepmind/librispeech-long |
73 | A Generalization Theory for Zero-Shot Prediction | https://openreview.net/forum?id=kJQgMGLrow | [
"Ronak Mehta",
"Zaid Harchaoui"
] | Oral | A modern paradigm for generalization in machine learning and AI consists of pre-training a task-agnostic foundation model, generally obtained using self-supervised and multimodal contrastive learning. The resulting representations can be used for prediction on a downstream task for which no labeled data is available. We present a theoretical framework to better understand this approach, called zero-shot prediction. We identify the target quantities that zero-shot prediction aims to learn, or learns in passing, and the key conditional independence relationships that enable its generalization ability. | zero-shot, self-supervised learning, foundation models, learning theory, statistical theory | We present a theoretical framework for zero-shot prediction by prompting, highlighting the conditional independence relationships supporting the success of this approach. | 4,085 | 2507.09128 | [
0.01375690195709467,
-0.01688716933131218,
-0.001775636337697506,
0.03859012573957443,
0.048776134848594666,
0.03975457698106766,
0.042016226798295975,
0.019084632396697998,
-0.05002748593688011,
-0.009873653762042522,
-0.021696269512176514,
0.033238913863897324,
-0.09214280545711517,
-0.0... | https://github.com/ronakdm/zeroshot |
74 | Machine Learning meets Algebraic Combinatorics: A Suite of Datasets Capturing Research-level Conjecturing Ability in Pure Mathematics | https://openreview.net/forum?id=tlniJJFUW2 | [
"Herman Chau",
"Helen Jenne",
"Davis Brown",
"Jesse He",
"Mark Raugas",
"Sara C. Billey",
"Henry Kvinge"
] | Oral | With recent dramatic increases in AI system capabilities, there has been growing interest in utilizing machine learning for reasoning-heavy, quantitative tasks, particularly mathematics. While there are many resources capturing mathematics at the high-school, undergraduate, and graduate level, there are far fewer resources available that align with the level of difficulty and open endedness encountered by professional mathematicians working on open problems. To address this, we introduce a new collection of datasets, the Algebraic Combinatorics Dataset Repository (ACD Repo), representing either foundational results or open problems in algebraic combinatorics, a subfield of mathematics that studies discrete structures arising from abstract algebra. Further differentiating our dataset collection is the fact that it aims at the conjecturing process. Each dataset includes an open-ended research level question and a large collection of examples (up to 10M in some cases) from which conjectures should be generated. We describe all nine datasets, the different ways machine learning models can be applied to them (e.g., training with narrow models followed by interpretability analysis or program synthesis with LLMs), and discuss some of the challenges involved in designing datasets like these. | Datasets, AI for math, Mathematical reasoning and conjecturing, Algebraic combinatorics | We introduce a collection of mathematics datasets representing foundational or open problems in algebraic combinatorics aimed at conjecturing capability in machine learning systems | 3,973 | 2503.06366 | [
-0.01696864143013954,
-0.03340081870555878,
-0.036027535796165466,
0.03783933445811272,
0.052373409271240234,
0.017994849011301994,
0.015707485377788544,
-0.010547955520451069,
-0.024913441389799118,
-0.010334976948797703,
-0.02078717201948166,
-0.005543265957385302,
-0.058907631784677505,
... | https://github.com/pnnl/ML4AlgComb |
75 | Going Deeper into Locally Differentially Private Graph Neural Networks | https://openreview.net/forum?id=2aKHuXdr7Q | [
"Longzhu He",
"Chaozhuo Li",
"Peng Tang",
"Sen Su"
] | Oral | Graph Neural Networks (GNNs) have demonstrated superior performance in a variety of graph mining and learning tasks. However, when node representations involve sensitive personal information or variables related to individuals, learning from graph data can raise significant privacy concerns. Although recent studies have explored local differential privacy (LDP) to address these concerns, they often introduce significant distortions to graph data, severely degrading private learning utility (e.g., node classification accuracy). In this paper, we present UPGNET, an LDP-based privacy-preserving graph learning framework that enhances utility while protecting user data privacy. Specifically, we propose a three-stage pipeline that generalizes the LDP protocols for node features, targeting privacy-sensitive scenarios. Our analysis identifies two key factors that affect the utility of privacy-preserving graph learning: *feature dimension* and *neighborhood size*. Based on the above analysis, UPGNET enhances utility by introducing two core layers: High-Order Aggregator (HOA) layer and the Node Feature Regularization (NFR) layer. Extensive experiments on real-world datasets indicate that UPGNET significantly outperforms existing methods in terms of both privacy protection and learning utility. | Differential Privacy, Graph Neural Networks, Privacy-preserving | null | 3,814 | null | [
-0.015733687207102776,
-0.01311597228050232,
0.018008021637797356,
0.06709785014390945,
0.051988065242767334,
0.0058191753923892975,
0.03413943573832512,
-0.0340263806283474,
-0.010235423222184181,
-0.014052926562726498,
0.037637438625097275,
-0.01670096255838871,
-0.08175327628850937,
-0.... | null |
76 | Polynomial-Delay MAG Listing with Novel Locally Complete Orientation Rules | https://openreview.net/forum?id=70voOlSPos | [
"Tian-Zuo Wang",
"Wen-Bo Du",
"Zhi-Hua Zhou"
] | Oral | A maximal ancestral graph (MAG) is widely used to characterize the causal relations among observable variables in the presence of latent variables. However, given observational data, only a partial ancestral graph representing a Markov equivalence class (MEC) of MAGs is identifiable, which generally contains uncertain causal relations. Due to the uncertainties, \emph{MAG listing}, \emph{i.e.}, listing all the MAGs in the MEC, is critical for many downstream tasks. In this paper, we present the first \emph{polynomial-delay} MAG listing method, where delay refers to the time for outputting each MAG, through introducing enumerated structural knowledge in the form of \emph{singleton background knowledge (BK)}. To incorporate such knowledge, we propose \emph{sound} and \emph{locally complete} orientation rules. By recursively introducing singleton BK and applying the rules, our method can output all and only the MAGs in the MEC with polynomial delay. Additionally, while the proposed novel rules enable more efficient MAG listing, for the goal of incorporating general BK, we present two counterexamples to imply that existing rules, including ours, are not yet \emph{complete}, which motivates two more rules. Experimental results validate the efficiency of the proposed MAG listing method. | maximal ancestral graphs, MAG listing | We present the first polynomial-delay maximal ancestral graph listing algorithm. | 3,646 | null | [
-0.02824612334370613,
-0.017713366076350212,
-0.022830847650766373,
0.026803473010659218,
0.023296358063817024,
0.007331150583922863,
0.05638973414897919,
0.005624726414680481,
-0.04176478460431099,
-0.04328890144824982,
-0.026431426405906677,
0.0393083356320858,
-0.05268749222159386,
0.00... | null |
77 | VideoRoPE: What Makes for Good Video Rotary Position Embedding? | https://openreview.net/forum?id=tO7OVZkCo1 | [
"Xilin Wei",
"Xiaoran Liu",
"Yuhang Zang",
"Xiaoyi Dong",
"Pan Zhang",
"Yuhang Cao",
"Jian Tong",
"Haodong Duan",
"Qipeng Guo",
"Jiaqi Wang",
"Xipeng Qiu",
"Dahua Lin"
] | Oral | While Rotary Position Embedding (RoPE) and its variants are widely adopted for their long-context capabilities, the extension of the 1D RoPE to video, with its complex spatio-temporal structure, remains an open challenge.
This work first introduces a comprehensive analysis that identifies four key characteristics essential for the effective adaptation of RoPE to video, which have not been fully considered in prior work.
As part of our analysis, we introduce a challenging V-NIAH-D (Visual Needle-In-A-Haystack with Distractors) task, which adds periodic distractors into V-NIAH.
The V-NIAH-D task demonstrates that previous RoPE variants, lacking appropriate temporal dimension allocation, are easily misled by distractors.
Based on our analysis, we introduce VideoRoPE, with a 3D structure designed to preserve spatio-temporal relationships.
VideoRoPE features low-frequency temporal allocation to mitigate periodic oscillations, a diagonal layout to maintain spatial symmetry, and adjustable temporal spacing to decouple temporal and spatial indexing.
VideoRoPE consistently surpasses previous RoPE variants, across diverse downstream tasks such as long video retrieval, video understanding, and video hallucination.
Our code and model weights will be publicly released. | Rotary Position Embedding (RoPE), Spatio-temporal Encoding, VideoRoPE, V-NIAH-D Task, Temporal Dimension Allocation, 3D Position Embedding, Low-frequency Temporal Allocation, Diagonal Layout, Adjustable Temporal Spacing, Video Retrieval, Video Understanding, Video Hallucination, Position Encoding for Video, Distractor Handling in RoPE, Long-context Modeling | This paper identifies four key criteria for positional encoding: structure, frequency allocation, spatial symmetry, and temporal scaling. We propose VideoRoPE, which outperforms prior methods in video retrieval and understanding. | 3,607 | 2502.05173 | [
0.013427574187517166,
0.00949780736118555,
0.0029793342109769583,
0.03663383796811104,
0.025859009474515915,
0.03728261962532997,
0.04734442010521889,
-0.015514004044234753,
-0.052213601768016815,
-0.04288002476096153,
-0.03747091069817543,
0.011830548755824566,
-0.057309094816446304,
0.00... | https://github.com/Wiselnn570/VideoRoPE |
78 | Inductive Moment Matching | https://openreview.net/forum?id=pwNSUo7yUb | [
"Linqi Zhou",
"Stefano Ermon",
"Jiaming Song"
] | Oral | Diffusion models and Flow Matching generate high-quality samples but are slow at inference, and distilling them into few-step models often leads to instability and extensive tuning. To resolve these trade-offs, we propose Moment Matching Self-Distillation (MMSD), a new class of generative models for one- or few-step sampling with a single-stage training procedure. Unlike distillation, MMSD does not require pre-training initialization and optimization of two networks; and unlike Consistency Models, MMSD guarantees distribution-level convergence and remains stable under various hyperparameters and standard model architectures. MMSD surpasses diffusion models on ImageNet-256x256 with 2.13 FID using only 8 inference steps and achieves state-of-the-art 2-step FID of 2.05 on CIFAR-10 for a model trained from scratch. | generative models, diffusion models, flow matching, moment matching, consistency models | null | 3,490 | 2503.07565 | [
0.011388036422431469,
-0.022348087280988693,
-0.026499737054109573,
0.09164702147245407,
0.0263688787817955,
0.03228135406970978,
0.03076137602329254,
-0.015624398365616798,
-0.01946573331952095,
-0.059812672436237335,
0.023836780339479446,
-0.0550338439643383,
-0.044352591037750244,
-0.01... | https://github.com/lumalabs/imm |
79 | Network Sparsity Unlocks the Scaling Potential of Deep Reinforcement Learning | https://openreview.net/forum?id=mIomqOskaa | [
"Guozheng Ma",
"Lu Li",
"Zilin Wang",
"Li Shen",
"Pierre-Luc Bacon",
"Dacheng Tao"
] | Oral | Effectively scaling up deep reinforcement learning models has proven notoriously difficult due to network pathologies during training, motivating various targeted interventions such as periodic reset and architectural advances such as layer normalization. Instead of pursuing more complex modifications, we show that introducing static network sparsity alone can unlock further scaling potential beyond their dense counterparts with state-of-the-art architectures. This is achieved through simple one-shot random pruning, where a predetermined percentage of network weights are randomly removed once before training. Our analysis reveals that, in contrast to naively scaling up dense DRL networks, such sparse networks achieve both higher parameter efficiency for network expressivity and stronger resistance to optimization challenges like plasticity loss and gradient interference. We further extend our evaluation to visual and streaming RL scenarios, demonstrating the consistent benefits of network sparsity. | Deep Reinforcement Learning, Network Sparsity, Scaling, Plasticity Loss, Regularization | Integrating network sparsity into the most advanced architectures can further unlock the scaling potential of DRL models while effectively mitigating optimization pathologies during scaling. | 3,388 | 2506.17204 | [
-0.00828350055962801,
-0.049274545162916183,
0.0026271799579262733,
0.04664471000432968,
0.044081203639507294,
0.03076743893325329,
0.008465401828289032,
-0.014904712326824665,
-0.06022492051124573,
-0.03698456659913063,
0.002394307404756546,
-0.009251558221876621,
-0.04171229898929596,
0.... | https://github.com/lilucse/SparseNetwork4DRL |
80 | LoRA-One: One-Step Full Gradient Could Suffice for Fine-Tuning Large Language Models, Provably and Efficiently | https://openreview.net/forum?id=KwIlvmLDLm | [
"Yuanhe Zhang",
"Fanghui Liu",
"Yudong Chen"
] | Oral | This paper explores how theory can guide and enhance practical algorithms, using Low-Rank Adaptation (LoRA) (Hu et al., 2022) in large language models as a case study. We rigorously prove that, under gradient descent, LoRA adapters align with specific singular subspaces of the one-step full fine-tuning gradient. This result suggests that, by properly initializing the adapters using the one-step full gradient, subspace alignment can be achieved immediately—applicable to both linear and nonlinear models. Building on our theory, we propose a theory-driven algorithm, LoRA-One, where the linear convergence (as well as generalization) is built and incorporating preconditioners theoretically helps mitigate the effects of ill-conditioning. Besides, our theory reveals connections between LoRA-One and other gradient-alignment-based methods, helping to clarify misconceptions in the design of such algorithms. LoRA-One achieves significant empirical improvements over LoRA and its variants across benchmarks in natural language understanding, mathematical reasoning, and code generation. Code is available at: https://github.com/YuanheZ/LoRA-One. | low-rank fine-tuning, linear convergence, subspace alignment | Our theory shows that one step of the full fine-tuning gradient can be sufficient for low-rank fine-tuning, and devises a theory-grounded algorithm for performance improvement in real-world tasks. | 3,286 | null | [
-0.017169557511806488,
-0.026278380304574966,
-0.00723546277731657,
0.023242367431521416,
0.025214895606040955,
0.0312202125787735,
0.038995932787656784,
0.005905191879719496,
-0.015910588204860687,
-0.022112088277935982,
0.004009567201137543,
0.01433003880083561,
-0.07927799224853516,
-0.... | https://github.com/YuanheZ/LoRA-One |
81 | Equivalence is All: A Unified View for Self-supervised Graph Learning | https://openreview.net/forum?id=ZAlII9wL5i | [
"Yejiang Wang",
"Yuhai Zhao",
"Zhengkui Wang",
"Ling Li",
"Jiapu Wang",
"Fangting Li",
"Miaomiao Huang",
"Shirui Pan",
"Xingwei Wang"
] | Oral | Node equivalence is common in graphs, such as computing networks, encompassing automorphic equivalence (preserving adjacency under node permutations) and attribute equivalence (nodes with identical attributes). Despite their importance for learning node representations, these equivalences are largely ignored by existing graph models. To bridge this gap, we propose a GrAph self-supervised Learning framework with Equivalence (GALE) and analyze its connections to existing techniques. Specifically, we: 1) unify automorphic and attribute equivalence into a single equivalence class; 2) enforce the equivalence principle to make representations within the same class more similar while separating those across classes; 3) introduce approximate equivalence classes with linear time complexity to address the NP-hardness of exact automorphism detection and handle node-feature variation; 4) analyze existing graph encoders, noting limitations in message passing neural networks and graph transformers regarding equivalence constraints; 5) show that graph contrastive learning is a degenerate form of equivalence constraint; and 6) demonstrate that GALE achieves superior performance over baselines. | Graph Self-Supervised Learning, Graph Neural Networks | We introduce a self-supervised graph learning framework from an equivalence perspective, unifying and enforcing node equivalence principles in representation learning. | 3,079 | null | [
-0.02062334679067135,
-0.03386535495519638,
0.0036523074377328157,
0.03761349990963936,
0.02135501243174076,
0.025373471900820732,
0.03273506462574005,
0.0040685259737074375,
-0.006079275161027908,
-0.029626993462443352,
0.008198424242436886,
-0.01426886860281229,
-0.07327616214752197,
0.0... | https://github.com/fulowl/GALE |
82 | On Path to Multimodal Generalist: General-Level and General-Bench | https://openreview.net/forum?id=VsJ1K2HV3k | [
"Hao Fei",
"Yuan Zhou",
"Juncheng Li",
"Xiangtai Li",
"Qingshan Xu",
"Bobo Li",
"Shengqiong Wu",
"Yaoting Wang",
"Junbao Zhou",
"Jiahao Meng",
"Qingyu Shi",
"Zhiyuan Zhou",
"Liangtao Shi",
"Minghe Gao",
"Daoan Zhang",
"Zhiqi Ge",
"Siliang Tang",
"Kaihang Pan",
"Yaobo Ye",
"Haob... | Oral | The Multimodal Large Language Model (MLLM) is currently experiencing rapid growth, driven by the advanced capabilities of language-based LLMs.
Unlike their specialist predecessors, existing MLLMs are evolving towards a Multimodal Generalist paradigm.
Initially limited to understanding multiple modalities, these models have advanced to not only comprehend but also generate across modalities.
Their capabilities have expanded from coarse-grained to fine-grained multimodal understanding and from supporting singular modalities to accommodating a wide array of or even arbitrary modalities.
To assess the capabilities of various MLLMs, a diverse array of benchmark test sets has been proposed.
This leads to a critical question: *Can we simply assume that higher performance across tasks indicates a stronger MLLM capability, bringing us closer to human-level AI?*
We argue that the answer is not as straightforward as it seems.
In this project, we introduce an evaluation framework to delineate the capabilities and behaviors of current multimodal generalists.
This framework, named **General-Level**, establishes 5-scale levels of MLLM performance and generality, offering a methodology to compare MLLMs and gauge the progress of existing systems towards more robust multimodal generalists and, ultimately, towards AGI (Artificial General Intelligence).
Central to our framework is the use of **Synergy** as the evaluative criterion, categorizing capabilities based on whether MLLMs preserve synergy across comprehension and generation, as well as across multimodal interactions.
To evaluate the comprehensive abilities of various generalists, we present a massive multimodal benchmark, **General-Bench**, which encompasses a broader spectrum of skills, modalities, formats, and capabilities, including over 700 tasks and 325,800 instances.
The evaluation results that involve over 100 existing state-of-the-art MLLMs uncover the capability rankings of generalists, highlighting the challenges in reaching genuine AI.
We expect this project to pave the way for future research on next-generation multimodal foundation models, providing a robust infrastructure to accelerate the realization of AGI.
Project Page: https://generalist.top/,
Leaderboard: https://generalist.top/leaderboard/,
Benchmark: https://huggingface.co/General-Level/. | Large Language Model, Multimodal Large Language Model, Multimodal Generalist, Evaluation, Benchmark | null | 2,912 | 2505.04620 | [
-0.007923800498247147,
-0.03342962637543678,
0.03134484961628914,
0.01342739537358284,
0.03036978282034397,
-0.014441375620663166,
0.03901706263422966,
0.03538735955953598,
-0.04166411608457565,
-0.006562179420143366,
-0.02523227408528328,
0.04673908278346062,
-0.07800176739692688,
-0.0010... | null |
83 | Sundial: A Family of Highly Capable Time Series Foundation Models | https://openreview.net/forum?id=LO7ciRpjI5 | [
"Yong Liu",
"Guo Qin",
"Zhiyuan Shi",
"Zhi Chen",
"Caiyin Yang",
"Xiangdong Huang",
"Jianmin Wang",
"Mingsheng Long"
] | Oral | We introduce Sundial, a family of native, flexible, and scalable time series foundation models. To predict the next-patch's distribution, we propose a TimeFlow Loss based on flow-matching, which facilitates native pre-training of Transformers on continuous-valued time series without discrete tokenization. Conditioned on arbitrary-length time series, our models are pre-trained without specifying any prior distribution and can generate multiple probable predictions, achieving more flexibility in representation learning than using parametric densities. Towards time series foundation models, we leverage minimal but crucial adaptations of Transformers and curate TimeBench with one trillion time points, comprising mostly real-world datasets and synthetic data. By mitigating mode collapse via TimeFlow Loss, we pre-train a family of Sundial models on TimeBench, which achieve unprecedented model capacity and generalization performance. In addition to excellent scalability, Sundial achieves state-of-the-art results on both point and probabilistic forecasting benchmarks with a just-in-time inference speed, i.e., making zero-shot predictions within a few milliseconds. We believe that Sundial's pioneering generative forecasting capability can improve model reliability in real-world decision-making. Code is available at: https://github.com/thuml/Sundial. | Time Series, Foundation Models | We introduce Sundial, a family of native, flexible, and scalable time series foundation models pre-trained on a trillion time points. | 2,877 | 2502.00816 | [
0.015037556178867817,
-0.06210596486926079,
0.006118363700807095,
0.016527321189641953,
0.03574110195040703,
0.05362454429268837,
0.0110111553221941,
0.011064582504332066,
-0.021262409165501595,
-0.049778588116168976,
0.028126250952482224,
0.001814396819099784,
-0.05761219561100006,
-0.004... | https://github.com/thuml/Sundial |
84 | Rényi Neural Processes | https://openreview.net/forum?id=qMt4KikFJg | [
"Xuesong Wang",
"He Zhao",
"Edwin V. Bonilla"
] | Oral | Neural Processes (NPs) are deep probabilistic models that represent stochastic processes by conditioning their prior distributions on a set of context points. Despite their advantages in uncertainty estimation for complex distributions, NPs enforce parameterization coupling between the conditional prior model and the posterior model. We show that this coupling amounts to prior misspecification and revisit the NP objective to address this issue. More specifically, we propose Rényi Neural Processes (RNP), a method that replaces the standard KL divergence with the Rényi divergence, dampening the effects of the misspecified prior during posterior updates. We validate our approach across multiple benchmarks including regression and image inpainting tasks, and show significant performance improvements of RNPs in real-world problems. Our extensive experiments show consistently better log-likelihoods over state-of-the-art NP models. | neural processes, Rényi divergence | Using Rényi divergence for robust inference of neural processes | 2,712 | null | [
-0.01271476224064827,
0.019029835239052773,
-0.019693022593855858,
0.033500466495752335,
0.03361405059695244,
0.07313226908445358,
0.014797920361161232,
0.008059333078563213,
-0.035049110651016235,
-0.05733388289809227,
-0.00271820230409503,
0.0013193993363529444,
-0.036572109907865524,
-0... | https://github.com/csiro-funml/renyineuralprocesses |
85 | Model Immunization from a Condition Number Perspective | https://openreview.net/forum?id=uitj69FqD5 | [
"Amber Yijia Zheng",
"Site Bai",
"Brian Bullins",
"Raymond A. Yeh"
] | Oral | Model immunization aims to pre-train models that are difficult to fine-tune on harmful tasks while retaining their utility on other non-harmful tasks. Though prior work has shown empirical evidence for immunizing text-to-image models, the key understanding of when immunization is possible and a precise definition of an immunized model remain unclear. In this work, we propose a framework, based on the condition number of a Hessian matrix, to analyze model immunization for linear models. Building on this framework, we design an algorithm with regularization terms to control the resulting condition numbers after pre-training. Empirical results on linear models and non-linear deep-nets demonstrate the effectiveness of the proposed algorithm on model immunization. The code is available at https://github.com/amberyzheng/model-immunization-cond-num. | Model Immunization, Optimization, Condition Number | null | 2,657 | 2505.23760 | [
-0.03449621796607971,
-0.005024863872677088,
-0.023665109649300575,
0.030638689175248146,
0.04235949367284775,
0.028501100838184357,
0.035547103732824326,
-0.007346604485064745,
-0.032523080706596375,
-0.03873944282531738,
0.008030110970139503,
0.010187453590333462,
-0.07816481590270996,
0... | https://github.com/amberyzheng/model-immunization-cond-num |
86 | Flowing Datasets with Wasserstein over Wasserstein Gradient Flows | https://openreview.net/forum?id=I1OHPb4zWo | [
"Clément Bonet",
"Christophe Vauthier",
"Anna Korba"
] | Oral | Many applications in machine learning involve data represented as probability distributions. The emergence of such data requires radically novel techniques to design tractable gradient flows on probability distributions over this type of (infinite-dimensional) objects. For instance, being able to flow labeled datasets is a core task for applications ranging from domain adaptation to transfer learning or dataset distillation. In this setting, we propose to represent each class by the associated conditional distribution of features, and to model the dataset as a mixture distribution supported on these classes (which are themselves probability distributions), meaning that labeled datasets can be seen as probability distributions over probability distributions. We endow this space with a metric structure from optimal transport, namely the Wasserstein over Wasserstein (WoW) distance, derive a differential structure on this space, and define WoW gradient flows. The latter enables to design dynamics over this space that decrease a given objective functional. We apply our framework to transfer learning and dataset distillation tasks, leveraging our gradient flow construction as well as novel tractable functionals that take the form of Maximum Mean Discrepancies with Sliced-Wasserstein based kernels between probability distributions. | Wasserstein gradient flows, optimal transport, datasets | We flow datasets using Wasserstein over Wasserstein gradient flows. | 2,603 | 2506.07534 | [
-0.02186880074441433,
-0.016913024708628654,
0.01227427925914526,
0.053298138082027435,
0.04020671173930168,
0.01398853212594986,
0.02193332463502884,
-0.00426305690780282,
0.013104761019349098,
-0.04243076965212822,
-0.028289681300520897,
-0.009550249204039574,
-0.06175412982702255,
-0.00... | https://github.com/clbonet/Flowing_Datasets_with_WoW_Gradient_Flows |
87 | Retrieval-Augmented Perception: High-resolution Image Perception Meets Visual RAG | https://openreview.net/forum?id=X9vBykZVYg | [
"Wenbin Wang",
"Yongcheng Jing",
"Liang Ding",
"Yingjie Wang",
"Li Shen",
"Yong Luo",
"Bo Du",
"Dacheng Tao"
] | Oral | High-resolution (HR) image perception remains a key challenge in multimodal large language models (MLLMs). To drive progress beyond the limits of heuristic methods, this paper advances HR perception capabilities of MLLMs by harnessing cutting-edge long-context techniques such as retrieval-augmented generation (RAG). Towards this end, this paper presents the first study exploring the use of RAG to address HR perception challenges. Specifically, we propose Retrieval-Augmented Perception (RAP), a training-free framework that retrieves and fuses relevant image crops while preserving spatial context using the proposed Spatial-Awareness Layout. To accommodate different tasks, the proposed Retrieved-Exploration Search (RE-Search) dynamically selects the optimal number of crops based on model confidence and retrieval scores. Experimental results on HR benchmarks demonstrate the significant effectiveness of RAP, with LLaVA-v1.5-13B achieving a 43\% improvement on $V^*$ Bench and 19\% on HR-Bench. Code is available at https://github.com/DreamMr/RAP. | Multimodal Large Language Models, High-resolution Image Perception | We propose Retrieval-Augmented Perception (RAP), a training-free framework that retrieves and fuses relevant image crops while preserving spatial context, with RE-Search dynamically selecting the optimal number of crops. | 2,560 | 2503.01222 | [
0.018899045884609222,
-0.00975438766181469,
-0.008702506311237812,
0.05086375027894974,
0.011852391064167023,
-0.02045593596994877,
0.01291133277118206,
0.014113467186689377,
-0.04538054019212723,
-0.016201501712203026,
-0.06891392916440964,
0.026195263490080833,
-0.06626014411449432,
-0.0... | https://github.com/DreamMr/RAP |
88 | Navigating Semantic Drift in Task-Agnostic Class-Incremental Learning | https://openreview.net/forum?id=M6L7Eaw9BW | [
"Fangwen Wu",
"Lechao Cheng",
"Shengeng Tang",
"Xiaofeng Zhu",
"Chaowei Fang",
"Dingwen Zhang",
"Meng Wang"
] | Oral | Class-incremental learning (CIL) seeks to enable a model to sequentially learn new classes while retaining knowledge of previously learned ones. Balancing flexibility and stability remains a significant challenge, particularly when the task ID is unknown. To address this, our study reveals that the gap in feature distribution between novel and existing tasks is primarily driven by differences in mean and covariance moments. Building on this insight, we propose a novel semantic drift calibration method that incorporates mean shift compensation and covariance calibration. Specifically, we calculate each class's mean by averaging its sample embeddings and estimate task shifts using weighted embedding changes based on their proximity to the previous mean, effectively capturing mean shifts for all learned classes with each new task. We also apply a Mahalanobis distance constraint for covariance calibration, aligning class-specific embedding covariances between old and current networks to mitigate the covariance shift. Additionally, we integrate a feature-level self-distillation approach to enhance generalization. Comprehensive experiments on commonly used datasets demonstrate the effectiveness of our approach. The source code is available at https://github.com/fwu11/MACIL.git. | Class-incremental learning, continual learning | null | 2,505 | 2502.07560 | [
0.005663998890668154,
-0.04208245500922203,
-0.023852137848734856,
0.019246909767389297,
0.043917007744312286,
0.0014141451101750135,
0.06841679662466049,
0.01211283914744854,
0.00007952025043778121,
-0.040274810045957565,
-0.01456171553581953,
0.00637007737532258,
-0.08161727339029312,
-0... | https://github.com/fwu11/macil |
89 | Can MLLMs Reason in Multimodality? EMMA: An Enhanced MultiModal ReAsoning Benchmark | https://openreview.net/forum?id=v26vwjxOEz | [
"Yunzhuo Hao",
"Jiawei Gu",
"Huichen Will Wang",
"Linjie Li",
"Zhengyuan Yang",
"Lijuan Wang",
"Yu Cheng"
] | Oral | The ability to organically reason over and with both text and images is a pillar of human intelligence, yet the ability of Multimodal Large Language Models (MLLMs) to perform such multimodal reasoning remains under-explored. Existing benchmarks often emphasize text-dominant reasoning or rely on shallow visual cues, failing to adequately assess integrated visual and textual reasoning. We introduce EMMA (Enhanced MultiModal reAsoning), a benchmark targeting organic multimodal reasoning across mathematics, physics, chemistry, and coding. EMMA tasks demand advanced cross-modal reasoning that cannot be addressed by reasoning independently in each modality, offering an enhanced test suite for MLLMs' reasoning capabilities. Our evaluation of state-of-the-art MLLMs on EMMA reveals significant limitations in handling complex multimodal and multi-step reasoning tasks, even with advanced techniques like Chain-of-Thought prompting and test-time compute scaling underperforming. These findings underscore the need for improved multimodal architectures and training paradigms to close the gap between human and model reasoning in multimodality. | Benchmark, Multimodal, Reasoning | We contribute a challenging multimodal reasoning benchmark. | 2,325 | 2501.05444 | [
0.0018832217901945114,
-0.010278221219778061,
-0.004030812066048384,
0.049561623483896255,
0.04196949675679207,
-0.011172821745276451,
0.025431517511606216,
0.018542926758527756,
-0.04597066342830658,
-0.006609773728996515,
-0.01128731481730938,
0.031664613634347916,
-0.055418189615011215,
... | https://github.com/EMMA-Bench/EMMA |
90 | Improved Regret Analysis in Gaussian Process Bandits: Optimality for Noiseless Reward, RKHS norm, and Non-Stationary Variance | https://openreview.net/forum?id=ybno0ZP44z | [
"Shogo Iwazaki",
"Shion Takeno"
] | Oral | We study the Gaussian process (GP) bandit problem, whose goal is to minimize regret under an unknown reward function lying in some reproducing kernel Hilbert space (RKHS).
The maximum posterior variance analysis is vital in analyzing near-optimal GP bandit algorithms such as maximum variance reduction (MVR) and phased elimination (PE).
Therefore, we first show a new upper bound on the maximum posterior variance, which improves the dependence on the noise variance parameters of the GP. By leveraging this result, we refine the MVR and PE to obtain (i) a nearly optimal regret upper bound in the noiseless setting and (ii) regret upper bounds that are optimal with respect to the RKHS norm of the reward function. Furthermore, as another application of our proposed bound, we analyze the GP bandit under the time-varying noise variance setting, which is the kernelized extension of the linear bandit with heteroscedastic noise. For this problem, we show that MVR and PE-based algorithms achieve noise variance-dependent regret upper bounds, which match our regret lower bound. | Gaussian process bandits, kernel bandits, noiseless setting | null | 2,194 | 2502.06363 | [
-0.023258959874510765,
0.012110925279557705,
0.021399706602096558,
0.040659043937921524,
0.025348931550979614,
0.03151937201619148,
0.044089026749134064,
0.002242014277726412,
0.0028734724037349224,
-0.06477286666631699,
-0.03732379153370857,
0.013882121071219444,
-0.0538196824491024,
-0.0... | null |
91 | CollabLLM: From Passive Responders to Active Collaborators | https://openreview.net/forum?id=DmH4HHVb3y | [
"Shirley Wu",
"Michel Galley",
"Baolin Peng",
"Hao Cheng",
"Gavin Li",
"Yao Dou",
"Weixin Cai",
"James Zou",
"Jure Leskovec",
"Jianfeng Gao"
] | Oral | Large Language Models are typically trained with next-turn rewards, limiting their ability to optimize for long-term interaction. As a result, they often respond passively to ambiguous or open-ended user requests, failing to help users reach their ultimate intents and leading to inefficient conversations. To address these limitations, we introduce CollabLLM, a novel and general training framework that enhances multiturn human-LLM collaboration. Its key innovation is a collaborative simulation that estimates the long-term contribution of responses
using Multiturn-aware Rewards. By reinforcement fine-tuning these rewards, CollabLLM goes beyond responding to user requests, and actively uncovers user intent and offers insightful suggestions—a key step towards more human-centered AI. We also devise a multiturn interaction benchmark with three challenging tasks such as document creation. CollabLLM significantly outperforms our baselines with averages of 18.5% higher task performance and 46.3% improved interactivity by LLM judges. Finally, we conduct a large user study with 201 judges, where CollabLLM increases user satisfaction by 17.6% and reduces user spent time by 10.4%. | Human-centered Large Language Model, Multiturn Interaction, Collaborative Problem-Solving, Reinforcement Learning | CollabLLM is a unified fine-tuning framework that optimizes LLMs for effective and efficient multiturn collaboration with users. | 1,940 | 2502.00640 | [
-0.019403759390115738,
-0.04497873783111572,
0.0005353566957637668,
0.025915784761309624,
0.021354516968131065,
0.01011186745017767,
0.009430786594748497,
0.03728891909122467,
-0.017686087638139725,
-0.018862394616007805,
-0.033966463059186935,
0.02769932523369789,
-0.05478730425238609,
-0... | https://github.com/Wuyxin/collabllm |
92 | Suitability Filter: A Statistical Framework for Classifier Evaluation in Real-World Deployment Settings | https://openreview.net/forum?id=V0w8Kj3K6L | [
"Angéline Pouget",
"Mohammad Yaghini",
"Stephan Rabanser",
"Nicolas Papernot"
] | Oral | Deploying machine learning models in safety-critical domains poses a key challenge: ensuring reliable model performance on downstream user data without access to ground truth labels for direct validation. We propose the _suitability filter_, a novel framework designed to detect performance deterioration by utilizing _suitability signals_—model output features that are sensitive to covariate shifts and indicative of potential prediction errors. The suitability filter evaluates whether classifier accuracy on unlabeled user data shows significant degradation compared to the accuracy measured on the labeled test dataset. Specifically, it ensures that this degradation does not exceed a pre-specified margin, which represents the maximum acceptable drop in accuracy. To achieve reliable performance evaluation, we aggregate suitability signals for both test and user data and compare these empirical distributions using statistical hypothesis testing, thus providing insights into decision uncertainty. Our modular method adapts to various models and domains. Empirical evaluations across different classification tasks demonstrate that the suitability filter reliably detects performance deviations due to covariate shift. This enables proactive mitigation of potential failures in high-stakes applications. | suitability, reliability, robustness, classifier, unlabeled data | The suitability filter detects performance degradation in machine learning models by comparing accuracy on unlabeled user data and labeled test data. | 1,934 | 2505.22356 | [
-0.015520550310611725,
-0.02327088639140129,
-0.008124344982206821,
0.014171960763633251,
0.08035266399383545,
0.022335663437843323,
-0.007276972755789757,
-0.03144684433937073,
-0.04286658763885498,
-0.03612739220261574,
0.00704122195020318,
0.03080708533525467,
-0.05881519988179207,
-0.0... | https://github.com/cleverhans-lab/suitability |
93 | Partition First, Embed Later: Laplacian-Based Feature Partitioning for Refined Embedding and Visualization of High-Dimensional Data | https://openreview.net/forum?id=6CwO5nVvku | [
"Erez Peterfreund",
"Ofir Lindenbaum",
"Yuval Kluger",
"Boris Landa"
] | Oral | Embedding and visualization techniques are essential for analyzing high-dimensional data, but they often struggle with complex data governed by multiple latent variables, potentially distorting key structural characteristics. This paper considers scenarios where the observed features can be partitioned into mutually exclusive subsets, each capturing a different smooth substructure. In such cases, visualizing the data based on each feature partition can better characterize the underlying processes and structures in the data, leading to improved interpretability. To partition the features, we propose solving an optimization problem that promotes graph Laplacian-based smoothness in each partition, thereby prioritizing partitions with simpler geometric structures. Our approach generalizes traditional embedding and visualization techniques, allowing them to learn multiple embeddings simultaneously. We establish that if several independent or partially dependent manifolds are embedded in distinct feature subsets in high-dimensional space, then our framework can reliably identify the correct subsets with theoretical guarantees. Finally, we demonstrate the effectiveness of our approach in extracting multiple low-dimensional structures and partially independent processes from both simulated and real data. | data visualization, dimensionality reduction, manifold learning, data embedding, feature partitioning | We present a feature partitioning approach for embedding and visualizing multiple low-dimensional structures within high-dimensional data | 1,917 | null | [
-0.007627582643181086,
-0.036162301898002625,
0.020718861371278763,
0.039161939173936844,
0.04476344212889671,
0.05337826907634735,
0.014466870576143265,
-0.01375425886362791,
-0.02038944698870182,
-0.05472871661186218,
0.0023969023022800684,
-0.03686713054776192,
-0.08415422588586807,
0.0... | https://github.com/erezpeter/Feature_Partition |
94 | Blink of an eye: a simple theory for feature localization in generative models | https://openreview.net/forum?id=QvqnPVGWAN | [
"Marvin Li",
"Aayush Karan",
"Sitan Chen"
] | Oral | Large language models can exhibit unexpected behavior in the blink of an eye. In a recent computer use demo, a language model switched from coding to Googling pictures of Yellowstone, and these sudden shifts in behavior have also been observed in reasoning patterns and jailbreaks. This phenomenon is not unique to autoregressive models: in diffusion models, key features of the final output are decided in narrow ``critical windows'' of the generation process. In this work we develop a simple, unifying theory to explain this phenomenon. Using the formalism of stochastic localization for generative models, we show that it emerges generically as the generation process localizes to a sub-population of the distribution it models. While critical windows have been studied at length in diffusion models, existing theory heavily relies on strong distributional assumptions and the particulars of Gaussian diffusion. In contrast to existing work our theory (1) applies to autoregressive and diffusion models; (2) makes very few distributional assumptions; (3) quantitatively improves previous bounds even when specialized to diffusions; and (4) requires basic mathematical tools. Finally, we validate our predictions empirically for LLMs and find that critical windows often coincide with failures in problem solving for various math and reasoning benchmarks. | stochastic localization, theory of diffusion, large language models, interpretability, reasoning | A simple, general, and unifying theory for feature localization in language and diffusion models | 1,904 | 2502.00921 | [
-0.013237161561846733,
0.00513472780585289,
-0.009355620481073856,
0.051768045872449875,
0.027582772076129913,
0.03344337269663811,
0.037275925278663635,
0.04004138708114624,
-0.02852078154683113,
-0.02658463642001152,
-0.014168769121170044,
-0.00017876776109915227,
-0.06584669649600983,
-... | https://github.com/marvinli-harvard/critical-windows-lm |
95 | On Differential Privacy for Adaptively Solving Search Problems via Sketching | https://openreview.net/forum?id=kEn7Wt6Yj2 | [
"Shiyuan Feng",
"Ying Feng",
"George Zhaoqi Li",
"Zhao Song",
"David Woodruff",
"Lichen Zhang"
] | Oral | Recently differential privacy has been used for a number of streaming, data structure, and dynamic graph problems as a means of hiding the internal randomness of the data structure, so that multiple possibly adaptive queries can be made without sacrificing the correctness of the responses. Although these works use differential privacy to show that for some problems it is possible to tolerate $T$ queries using $\widetilde{O}(\sqrt{T})$ copies of a data structure, such results only apply to {\it numerical estimation problems}, and only return the {\it cost} of an optimization problem rather than the solution itself. In this paper we investigate the use of differential privacy for adaptive queries to {\it search} problems, which are significantly more challenging since the responses to queries can reveal much more about the internal randomness than a single numerical query. We focus on two classical search problems: nearest neighbor queries and regression with arbitrary turnstile updates. We identify key parameters to these problems, such as the number of $c$-approximate near neighbors and the matrix condition number, and use different differential privacy techniques to design algorithms returning the solution point or solution vector with memory and time depending on these parameters. We give algorithms for each of these problems that achieve similar tradeoffs. | data structure, adaptive robustness, differential privacy | We develop data structures for search problems such as regression and nearest neighbor search that are robust against an adaptive adversary. | 1,638 | 2506.05503 | [
-0.030447781085968018,
0.0014767537359148264,
0.0032730200327932835,
0.07836337387561798,
0.05619911476969719,
0.03697753697633743,
0.035688240081071854,
-0.009683913551270962,
-0.02585800737142563,
-0.06066088005900383,
-0.012125000357627869,
-0.03488576039671898,
-0.06063583865761757,
0.... | null |
96 | Exploring and Mitigating Adversarial Manipulation of Voting-Based Leaderboards | https://openreview.net/forum?id=zf9zwCRKyP | [
"Yangsibo Huang",
"Milad Nasr",
"Anastasios Nikolas Angelopoulos",
"Nicholas Carlini",
"Wei-Lin Chiang",
"Christopher A. Choquette-Choo",
"Daphne Ippolito",
"Matthew Jagielski",
"Katherine Lee",
"Ken Liu",
"Ion Stoica",
"Florian Tramèr",
"Chiyuan Zhang"
] | Oral | It is now common to evaluate Large Language Models (LLMs) by having humans manually vote to evaluate model outputs, in contrast to typical benchmarks that evaluate knowledge or skill at some particular task. Chatbot Arena, the most popular benchmark of this type, ranks models by asking users to select the better response between two randomly selected models (without revealing which model was responsible for the generations). These platforms are widely trusted as a fair and accurate measure of LLM capabilities. In this paper, we show that if bot protection and other defenses are not implemented, these voting-based benchmarks are potentially vulnerable to adversarial manipulation. Specifically, we show that an attacker can alter the leaderboard (to promote their favorite model or demote competitors) at the cost of roughly a thousand votes (verified in a simulated, offline version of Chatbot Arena). Our attack consists of two steps: first, we show how an attacker can determine which model was used to generate a given reply with more than $95\%$ accuracy; and then, the attacker can use this information to consistently vote for (or against) a target model. Working with the Chatbot Arena developers, we identify, propose, and implement mitigations to improve the robustness of Chatbot Arena against adversarial manipulation, which, based on our analysis, substantially increases the cost of such attacks. Some of these defenses were present before our collaboration, such as bot protection with Cloudflare, malicious user detection, and rate limiting. Others, including reCAPTCHA and login are being integrated to strengthen the security in Chatbot Arena. | Security, LLM leaderboard, LLM evaluation | null | 1,637 | 2501.07493 | [
-0.019309069961309433,
-0.04264960438013077,
0.002011202508583665,
0.039675191044807434,
0.00936762522906065,
-0.00571359833702445,
0.04569513723254204,
0.000013460569789458532,
-0.03479170799255371,
-0.0030824528075754642,
-0.037948768585920334,
0.02459915727376938,
-0.06590693444013596,
... | null |
97 | What Limits Virtual Agent Application? OmniBench: A Scalable Multi-Dimensional Benchmark for Essential Virtual Agent Capabilities | https://openreview.net/forum?id=4tFSKOY2mT | [
"Wendong Bu",
"Yang Wu",
"Qifan Yu",
"Minghe Gao",
"Bingchen Miao",
"Zhenkui Zhang",
"Kaihang Pan",
"liyunfei",
"Mengze Li",
"Wei Ji",
"Juncheng Li",
"Siliang Tang",
"Yueting Zhuang"
] | Oral | As multimodal large language models (MLLMs) advance, MLLM-based virtual agents have demonstrated remarkable performance. However, existing benchmarks face significant limitations, including uncontrollable task complexity, extensive manual annotation, and a lack of multidimensional evaluation. In response to these challenges, we introduce OmniBench, a self-generating, graph-based benchmark with an automated pipeline for synthesizing tasks of controllable complexity through subtask composition. To evaluate the diverse capabilities of virtual agents on the graph, we further present OmniEval, a multidimensional evaluation framework that includes subtask-level evaluation, graph-based metrics, and comprehensive tests across 10 capabilities. Our synthesized dataset contains 36k graph-structured tasks across 20 scenarios, achieving a 91% human acceptance rate. Training on our graph-structured data shows that it improves generalization across environments. We conduct multidimensional evaluations for virtual agents, revealing their performance across various capabilities and paving the way for future advancements. Our project is available at https://omni-bench.github.io. | Virtual Agent; Digital Agent; Multidimensional Benchmark | null | 1,439 | 2506.08933 | [
-0.000245263654505834,
-0.02174264006316662,
0.023878159001469612,
0.03886210545897484,
0.022258702665567398,
0.0006736441282555461,
0.039135709404945374,
0.011561647057533264,
-0.024155013263225555,
-0.04383229464292526,
-0.0015289620496332645,
-0.007743433583527803,
-0.06910879909992218,
... | null |
98 | Fundamental Bias in Inverting Random Sampling Matrices with Application to Sub-sampled Newton | https://openreview.net/forum?id=LwQGRGJTHw | [
"Chengmei Niu",
"Zhenyu Liao",
"Zenan Ling",
"Michael W. Mahoney"
] | Oral | A substantial body of work in machine learning (ML) and randomized numerical linear algebra (RandNLA) has exploited various sorts of random sketching methodologies, including random sampling and random projection, with much of the analysis using Johnson--Lindenstrauss and subspace embedding techniques. Recent studies have identified the issue of *inversion bias* -- the phenomenon that inverses of random sketches are *not* unbiased, despite the unbiasedness of the sketches themselves. This bias presents challenges for the use of random sketches in various ML pipelines, such as fast stochastic optimization, scalable statistical estimators, and distributed optimization. In the context of random projection, the inversion bias can be easily corrected for dense Gaussian projections (which are, however, too expensive for many applications). Recent work has shown how the inversion bias can be corrected for sparse sub-gaussian projections. In this paper, we show how the inversion bias can be corrected for random sampling methods, both uniform and non-uniform leverage-based, as well as for structured random projections, including those based on the Hadamard transform. Using these results, we establish problem-independent local convergence rates for sub-sampled Newton methods. | Inversion bias, random matrix theory, randomized numerical linear algebra, random sampling, sub-sampled Newton | Use RMT to characterize inversion bias for random sampling, propose debiasing, and apply to establish problem-independent convergence rate for SSN. | 1,426 | 2502.13583 | [
-0.017938891425728798,
-0.04057541489601135,
-0.005176707170903683,
0.05715867877006531,
0.02687077410519123,
0.021107781678438187,
0.02669535204768181,
0.019668659195303917,
-0.027308497577905655,
-0.0619734451174736,
-0.005025265272706747,
-0.029645511880517006,
-0.06040424108505249,
-0.... | null |
99 | Normalizing Flows are Capable Generative Models | https://openreview.net/forum?id=2uheUFcFsM | [
"Shuangfei Zhai",
"Ruixiang ZHANG",
"Preetum Nakkiran",
"David Berthelot",
"Jiatao Gu",
"Huangjie Zheng",
"Tianrong Chen",
"Miguel Ángel Bautista",
"Navdeep Jaitly",
"Joshua M. Susskind"
] | Oral | Normalizing Flows (NFs) are likelihood-based models for continuous inputs. They have demonstrated promising results on both density estimation and generative modeling tasks, but have received relatively little attention in recent years. In this work, we demonstrate that NFs are more powerful than previously believed. We present TarFlow: a simple and scalable architecture that enables highly performant NF models. TarFlow can be thought of as a Transformer-based variant of Masked Autoregressive Flows (MAFs): it consists of a stack of autoregressive Transformer blocks on image patches, alternating the autoregression direction between layers. TarFlow is straightforward to train end-to-end, and capable of directly modeling and generating pixels. We also propose three key techniques to improve sample quality: Gaussian noise augmentation during training, a post training denoising procedure, and an effective guidance method for both class-conditional and unconditional settings. Putting these together, TarFlow sets new state-of-the-art results on likelihood estimation for images, beating the previous best methods by a large margin, and generates samples with quality and diversity comparable to diffusion models, for the first time with a stand-alone NF model. We make our code available at https://github.com/apple/ml-tarflow. | Normalizing flows, autoregressive models, likelihood estimation | We show that normalizing flows can work great as a generative modeling principle, and propose a simple architecture and set of techniques to achieve it. | 1,374 | 2412.06329 | [
0.028636621311306953,
-0.041483208537101746,
0.004399371333420277,
0.04593415930867195,
0.04374518617987633,
0.0618305429816246,
0.020943956449627876,
-0.0015397499082610011,
-0.020746489986777306,
-0.06115875765681267,
-0.0052260649390518665,
-0.029285598546266556,
-0.05722849816083908,
0... | https://github.com/apple/ml-tarflow |