Dataset schema (column name, type, and value/length range):

paper_id            uint32         0 – 3.26k
title               stringlengths  15 – 150
paper_url           stringlengths  42 – 42
authors             listlengths    1 – 21
type                stringclasses  3 values
abstract            stringlengths  393 – 2.58k
keywords            stringlengths  5 – 409
TL;DR               stringlengths  7 – 250
submission_number   int64          1 – 16.4k
arxiv_id            stringlengths  10 – 10
embedding           listlengths    768 – 768
github              stringlengths  26 – 123

The records below follow this column order, one field per line; null marks a missing value, and long fields (abstracts, embeddings) are truncated with "..." as in the source.
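Concretely, a row under this schema can be sketched as a typed Python structure. This is a minimal illustrative sketch, not part of the dataset itself: the class name `PaperRecord`, the field name `tldr` (standing in for the "TL;DR" column, which is not a valid identifier), and the `check_record` helper are all assumptions; only the column names and ranges come from the schema above.

```python
from typing import List, Optional, TypedDict

class PaperRecord(TypedDict):
    """One row of the dataset; fields mirror the schema's column names.
    (The "TL;DR" column is renamed tldr to be a valid identifier.)"""
    paper_id: int                 # uint32, 0 .. 3.26k
    title: str                    # 15 .. 150 chars
    paper_url: str                # exactly 42 chars (OpenReview forum URL)
    authors: List[str]            # 1 .. 21 names
    type: str                     # one of 3 classes, e.g. "Poster"
    abstract: str                 # 393 .. 2.58k chars
    keywords: str                 # comma-separated, 5 .. 409 chars
    tldr: Optional[str]           # 7 .. 250 chars, or None
    submission_number: int        # int64
    arxiv_id: Optional[str]       # exactly 10 chars, or None
    embedding: List[float]        # fixed 768-dim vector
    github: Optional[str]         # repo URL, 26 .. 123 chars, or None

def check_record(rec: PaperRecord) -> bool:
    """Spot-check a record against the ranges in the schema block."""
    return (
        15 <= len(rec["title"]) <= 150
        and len(rec["paper_url"]) == 42
        and 1 <= len(rec["authors"]) <= 21
        and len(rec["embedding"]) == 768
    )
```

For example, the first record below (paper_id 200) has a 42-character OpenReview URL and a 768-dimensional embedding, so it passes `check_record`.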
paper_id: 200
title: Portable Reward Tuning: Towards Reusable Fine-Tuning across Different Pretrained Models
paper_url: https://openreview.net/forum?id=cYNBsMTAVL
authors: [ "Daiki Chijiwa", "Taku Hasegawa", "Kyosuke Nishida", "Kuniko Saito", "Susumu Takeuchi" ]
type: Poster
abstract: While foundation models have been exploited for various expert tasks with their fine-tuned parameters, any foundation model will be eventually outdated due to its old knowledge or limited capability, and thus should be replaced by a new foundation model. Subsequently, to benefit from its latest knowledge or improved ca...
keywords: inference-time tuning, reward maximization
TL;DR: null
submission_number: 15,647
arxiv_id: 2502.12776
embedding: [ -0.005974546540528536, -0.027006598189473152, -0.00017963968275580555, 0.03618025407195091, 0.0646805688738823, 0.02943721041083336, 0.02252942882478237, 0.001321877003647387, -0.03230111300945282, -0.03611375764012337, -0.01610499620437622, 0.05043309926986694, -0.04195476695895195, -0.02...
github: null

paper_id: 201
title: LOB-Bench: Benchmarking Generative AI for Finance - an Application to Limit Order Book Data
paper_url: https://openreview.net/forum?id=CXPpYJpYXQ
authors: [ "Peer Nagy", "Sascha Yves Frey", "Kang Li", "Bidipta Sarkar", "Svitlana Vyetrenko", "Stefan Zohren", "Ani Calinescu", "Jakob Nicolaus Foerster" ]
type: Poster
abstract: While financial data presents one of the most challenging and interesting sequence modelling tasks due to high noise, heavy tails, and strategic interactions, progress in this area has been hindered by the lack of consensus on quantitative evaluation paradigms. To address this, we present **LOB-Bench**, a benchmark, i...
keywords: finance, generative models, time series, state-space models, benchmark
TL;DR: LOB-Bench offers a rigorous framework and open-source Python package for standardized evaluation of generative limit order book data models, addressing evaluation gaps and enhancing model comparisons with quantitative metrics.
submission_number: 15,644
arxiv_id: null
embedding: [ -0.01277895923703909, -0.044222183525562286, -0.02796485833823681, 0.038647543638944626, 0.022061878815293312, 0.029628925025463104, -0.009959861636161804, 0.03692074120044708, -0.005737797357141972, -0.01077194046229124, 0.016870252788066864, 0.0233281459659338, -0.0867723822593689, 0.004...
github: https://github.com/peernagy/lob_bench

paper_id: 202
title: Comparing Comparisons: Informative and Easy Human Feedback with Distinguishability Queries
paper_url: https://openreview.net/forum?id=Cf8gsqWrua
authors: [ "Xuening Feng", "Zhaohui JIANG", "Timo Kaufmann", "Eyke Hüllermeier", "Paul Weng", "Yifei Zhu" ]
type: Poster
abstract: Learning human objectives from preference feedback has significantly advanced reinforcement learning (RL) in domains where objectives are hard to formalize. However, traditional methods based on pairwise trajectory comparisons face notable challenges, including the difficulty in comparing trajectories with subtle diff...
keywords: Reinforcement Learning from Human Feedback, Preference-based Reinforcement Learning, Human-in-the-loop Machine Learning
TL;DR: null
submission_number: 15,643
arxiv_id: null
embedding: [ -0.025567354634404182, -0.015685604885220528, 0.010983878746628761, 0.05624179169535637, 0.026758559048175812, 0.007314339745789766, 0.020544636994600296, -0.008751153014600277, -0.02985978126525879, -0.03413157910108566, -0.036536604166030884, 0.03984537348151207, -0.056102972477674484, -...
github: null

paper_id: 203
title: BounDr.E: Predicting Drug-likeness via Biomedical Knowledge Alignment and EM-like One-Class Boundary Optimization
paper_url: https://openreview.net/forum?id=Z9Xugry05b
authors: [ "Dongmin Bang", "Inyoung Sung", "Yinhua Piao", "Sangseon Lee", "Sun Kim" ]
type: Poster
abstract: The advent of generative AI now enables large-scale $\textit{de novo}$ design of molecules, but identifying viable drug candidates among them remains an open problem. Existing drug-likeness prediction methods often rely on ambiguous negative sets or purely structural features, limiting their ability to accurately class...
keywords: Drug-likeness, Expectation-Maximization, Multi-modal learning, AI Drug Discovery
TL;DR: Drug-likeness prediction framework based on EM-like one-class boundary optimization with multi-modal alignment of biomedical knowledge graph and structural space.
submission_number: 15,634
arxiv_id: null
embedding: [ 0.018348243087530136, -0.010596939362585545, -0.016039540991187096, 0.025284355506300926, 0.06475017219781876, -0.030232718214392662, 0.009158877655863762, -0.06138798967003822, 0.03593878820538521, -0.0316440686583519, 0.0027634035795927048, 0.013251313008368015, -0.08898905664682388, 0.0...
github: null

paper_id: 204
title: Private Federated Learning using Preference-Optimized Synthetic Data
paper_url: https://openreview.net/forum?id=ZuaU2bYzlc
authors: [ "Charlie Hou", "Mei-Yu Wang", "Yige Zhu", "Daniel Lazar", "Giulia Fanti" ]
type: Poster
abstract: In practical settings, differentially private federated learning (DP-FL) is the dominant method for training models from private, on-device client data. Recent work has suggested that DP-FL may be enhanced or outperformed by methods that use DP synthetic data (Wu et al., 2024; Hou et al., 2024). The primary algorithms...
keywords: Differential privacy, large language models, synthetic data, federated learning, preference optimization, reinforcement learning
TL;DR: We cast private on-device learning (under the synthetic data framework) as an LLM preference optimization problem, and greatly improve the state of the art.
submission_number: 15,630
arxiv_id: 2504.16438
embedding: [ 0.0060624051839113235, -0.022069120779633522, 0.012321260757744312, 0.07747496664524078, 0.05628231540322304, 0.02460765838623047, 0.02810048870742321, -0.02121553011238575, 0.0043147956021130085, -0.0342269241809845, 0.002386972773820162, 0.007658328395336866, -0.05735692009329796, 0.0029...
github: https://github.com/meiyuw/POPri

paper_id: 205
title: Provable Benefit of Random Permutations over Uniform Sampling in Stochastic Coordinate Descent
paper_url: https://openreview.net/forum?id=KBUSuiLBMq
authors: [ "Donghwa Kim", "Jaewook Lee", "Chulhee Yun" ]
type: Poster
abstract: We analyze the convergence rates of two popular variants of coordinate descent (CD): random CD (RCD), in which the coordinates are sampled uniformly at random, and random-permutation CD (RPCD), in which random permutations are used to select the update indices. Despite abundant empirical evidence that RPCD outperforms ...
keywords: Optimization, Stochastic Optimization, Coordinate Descent, Random Permutations
TL;DR: We prove that using indices from random permutations for coordinate descent (RPCD) is provably faster than using uniformly sampled indices (RCD) for a subclass of positive definite quadratic functions.
submission_number: 15,629
arxiv_id: 2505.23152
embedding: [ -0.02737405151128769, -0.026123512536287308, 0.007012694608420134, 0.07725202292203903, 0.0379529669880867, 0.023253291845321655, 0.006371787283569574, 0.008090315386652946, -0.015932338312268257, -0.044975992292165756, -0.0002796603657770902, -0.025903288275003433, -0.06391479820013046, -...
github: null

paper_id: 206
title: Optimal and Practical Batched Linear Bandit Algorithm
paper_url: https://openreview.net/forum?id=WcFLasjwXs
authors: [ "Sanghoon Yu", "Min-hwan Oh" ]
type: Poster
abstract: We study the linear bandit problem under limited adaptivity, known as the batched linear bandit. While existing approaches can achieve near-optimal regret in theory, they are often computationally prohibitive or underperform in practice. We propose BLAE, a novel batched algorithm that integrates arm elimination with re...
keywords: linear bandit, batched bandit, exploration-exploitation
TL;DR: null
submission_number: 15,612
arxiv_id: 2507.08438
embedding: [ 0.004151431377977133, 0.009913618676364422, 0.01569448411464691, 0.011247048154473305, 0.04983857646584511, 0.04317551851272583, 0.03476109728217125, 0.010470116510987282, -0.03223991021513939, -0.047696828842163086, -0.007355368696153164, -0.021043851971626282, -0.06194404512643814, -0.01...
github: null

paper_id: 207
title: SAE-V: Interpreting Multimodal Models for Enhanced Alignment
paper_url: https://openreview.net/forum?id=S4HPn5Bo6k
authors: [ "Hantao Lou", "Changye Li", "Jiaming Ji", "Yaodong Yang" ]
type: Poster
abstract: With the integration of image modality, the semantic space of multimodal large language models (MLLMs) is more complex than text-only models, making their interpretability more challenging and their alignment less stable, particularly susceptible to low-quality data, which can lead to inconsistencies between modalities...
keywords: interpretability, alignment, multimodal large language model
TL;DR: We propose SAE-V, a mechanistic interpretability framework for multimodal large language models to analyze multimodal features and enhance their alignment.
submission_number: 15,602
arxiv_id: null
embedding: [ 0.02661067433655262, 0.0021285456605255604, 0.0029011168517172337, 0.0293342936784029, 0.02894461341202259, 0.051334839314222336, 0.05250173434615135, 0.012039435096085072, -0.035310521721839905, -0.01726171188056469, -0.010550279170274734, 0.03979611396789551, -0.057696059346199036, 0.022...
github: https://github.com/PKU-Alignment/SAE-V

paper_id: 208
title: OR-Bench: An Over-Refusal Benchmark for Large Language Models
paper_url: https://openreview.net/forum?id=CdFnEu0JZV
authors: [ "Justin Cui", "Wei-Lin Chiang", "Ion Stoica", "Cho-Jui Hsieh" ]
type: Poster
abstract: Large Language Models (LLMs) require careful safety alignment to prevent malicious outputs. While significant research focuses on mitigating harmful content generation, the enhanced safety often comes with the side effect of over-refusal, where LLMs may reject innocuous prompts and become less helpful. Although the iss...
keywords: safety, llm, over-refusal
TL;DR: an over-refusal benchmark for large language models
submission_number: 15,599
arxiv_id: null
embedding: [ -0.03918001055717468, -0.04338321462273598, -0.01758507452905178, 0.03354141488671303, 0.015826663002371788, 0.014336252585053444, 0.032396163791418076, 0.01862679049372673, -0.03610946983098984, 0.00473130913451314, -0.031109889969229698, 0.01179045531898737, -0.06755173951387405, -0.0217...
github: https://github.com/justincui03/or-bench

paper_id: 209
title: TRACE Back from the Future: A Probabilistic Reasoning Approach to Controllable Language Generation
paper_url: https://openreview.net/forum?id=LhkSfpfRXW
authors: [ "Gwen Yidou Weng", "Benjie Wang", "Guy Van den Broeck" ]
type: Poster
abstract: As large language models (LMs) advance, there is an increasing need to control their outputs to align with human values (e.g., detoxification) or desired attributes (e.g., personalization, topic). However, autoregressive models focus on next-token predictions and struggle with global properties that require looking ahe...
keywords: Controlled Generation, Probabilistic Reasoning, Tractable Inference, Hidden Markov Models (HMM), Detoxification
TL;DR: TRACE achieves efficient and adaptable control of language models by using a one-time distilled HMM and a lightweight log-linear classifier to perform exact probabilistic reasoning over future text.
submission_number: 15,596
arxiv_id: 2504.18535
embedding: [ -0.03144156560301781, 0.011673332192003727, -0.017381547018885612, 0.0239377673715353, 0.057024091482162476, 0.026876114308834076, 0.042621683329343796, 0.038221944123506546, -0.013793901540338993, -0.029892180114984512, -0.014679092913866043, 0.026195405051112175, -0.0513133630156517, -0....
github: https://github.com/yidouweng/trace

paper_id: 210
title: AtlasD: Automatic Local Symmetry Discovery
paper_url: https://openreview.net/forum?id=aLDAu7QDw0
authors: [ "Manu Bhat", "Jonghyun Park", "Jianke Yang", "Nima Dehmamy", "Robin Walters", "Rose Yu" ]
type: Poster
abstract: Existing symmetry discovery methods predominantly focus on global transformations across the entire system or space, but they fail to consider the symmetries in local neighborhoods. This may result in the reported symmetry group being a misrepresentation of the true symmetry. In this paper, we formalize the notion of l...
keywords: Local symmetry discovery, symmetry discovery, equivariance, gauge equivariant neural network, Lie theory
TL;DR: A symmetry discovery method capable of learning local transformations.
submission_number: 15,595
arxiv_id: 2504.10777
embedding: [ -0.01690518483519554, 0.0034752548672258854, -0.008452215231955051, 0.032751381397247314, 0.008949433453381062, 0.02207341603934765, 0.010533868335187435, 0.004589957185089588, -0.03841960057616234, -0.039424605667591095, 0.012342535890638828, -0.04358449578285217, -0.050247929990291595, 0...
github: https://github.com/Rose-STL-Lab/AtlasD

paper_id: 211
title: Generalization Analysis for Supervised Contrastive Representation Learning under Non-IID Settings
paper_url: https://openreview.net/forum?id=kWSRVtuIuH
authors: [ "Nong Minh Hieu", "Antoine Ledent" ]
type: Poster
abstract: Contrastive Representation Learning (CRL) has achieved impressive success in various domains in recent years. Nevertheless, the theoretical understanding of the generalization behavior of CRL has remained limited. Moreover, to the best of our knowledge, the current literature only analyzes generalization bounds under t...
keywords: Contrastive Learning, Generalization Analysis
TL;DR: We derive generalization bounds for CRL when data is limited to a fixed pool of labeled, reusable data points across tuples
submission_number: 15,584
arxiv_id: 2505.04937
embedding: [ -0.015555732883512974, -0.018997246399521828, -0.003417306113988161, 0.04728379100561142, 0.017864227294921875, 0.029824500903487206, 0.027188843116164207, 0.003321979893371463, -0.03786720335483551, 0.009234640747308731, -0.035942792892456055, 0.014274485409259796, -0.08092272281646729, 0...
github: null

paper_id: 212
title: A Theoretical Justification for Asymmetric Actor-Critic Algorithms
paper_url: https://openreview.net/forum?id=F1yANMCnAn
authors: [ "Gaspard Lambrechts", "Damien Ernst", "Aditya Mahajan" ]
type: Poster
abstract: In reinforcement learning for partially observable environments, many successful algorithms have been developed within the asymmetric learning paradigm. This paradigm leverages additional state information available at training time for faster learning. Although the proposed learning objectives are usually theoreticall...
keywords: Partially Observable Environment, Asymmetric Learning, Privileged Information, Privileged Critic, Convergence Analysis, Asymmetric Actor-Critic, Finite-Time Bound, Agent-State Policy, Aliasing
TL;DR: We derive a finite-time bound for an asymmetric actor-critic algorithm and compare it with the symmetric counterpart, offering a justification for the effectiveness of asymmetric learning
submission_number: 15,582
arxiv_id: 2501.19116
embedding: [ -0.02491084486246109, -0.03515004739165306, -0.026232730597257614, 0.02072093077003956, 0.03042580559849739, 0.030217375606298447, 0.027560334652662277, 0.007484325673431158, -0.029283951967954636, -0.022589612752199173, 0.019056253135204315, 0.017411353066563606, -0.09007055312395096, -0....
github: null

paper_id: 213
title: Reducing Confounding Bias without Data Splitting for Causal Inference via Optimal Transport
paper_url: https://openreview.net/forum?id=fd7ddFBNmP
authors: [ "Yuguang Yan", "Zongyu Li", "Haolin Yang", "Zeqin Yang", "Hao Zhou", "Ruichu Cai", "Zhifeng Hao" ]
type: Poster
abstract: Causal inference seeks to estimate the effect given a treatment such as a medicine or the dosage of a medication. To reduce the confounding bias caused by the non-randomized treatment assignment, most existing methods reduce the shift between subpopulations receiving different treatments. However, these methods split l...
keywords: Causal inference, continuous treatment, optimal transport
TL;DR: null
submission_number: 15,581
arxiv_id: null
embedding: [ -0.0055163404904305935, -0.022879019379615784, -0.04057334363460541, 0.04433216527104378, 0.026921279728412628, 0.01238779816776514, 0.03883914649486542, -0.002489388920366764, 0.0029169931076467037, -0.06804607063531876, 0.007618395611643791, 0.010652336291968822, -0.08862534910440445, 0....
github: null

paper_id: 214
title: Provably Near-Optimal Federated Ensemble Distillation with Negligible Overhead
paper_url: https://openreview.net/forum?id=6znPjYn11w
authors: [ "Won-Jun Jang", "Hyeon-Seo Park", "Si-Hyeon Lee" ]
type: Poster
abstract: Federated ensemble distillation addresses client heterogeneity by generating pseudo-labels for an unlabeled server dataset based on client predictions and training the server model using the pseudo-labeled dataset. The unlabeled server dataset can either be pre-existing or generated through a data-free approach. The ef...
keywords: Federated learning, ensemble distillation, data heterogeneity, generative adversarial network
TL;DR: null
submission_number: 15,561
arxiv_id: 2502.06349
embedding: [ -0.01819850131869316, -0.03505121171474457, 0.00040813072700984776, 0.06958059966564178, 0.0379326306283474, -0.009438022039830685, 0.01459731999784708, -0.023803742602467537, -0.0038283970206975937, -0.028649557381868362, -0.02263687178492546, -0.02390792779624462, -0.06055254489183426, 0...
github: https://github.com/pupiu45/FedGO

paper_id: 215
title: Ensemble Distribution Distillation via Flow Matching
paper_url: https://openreview.net/forum?id=waeJHU2oeI
authors: [ "Jonggeon Park", "Giung Nam", "Hyunsu Kim", "Jongmin Yoon", "Juho Lee" ]
type: Poster
abstract: Neural network ensembles have proven effective in improving performance across a range of tasks; however, their high computational cost limits their applicability in resource-constrained environments or for large models. Ensemble distillation, the process of transferring knowledge from an ensemble teacher to a smaller ...
keywords: Ensemble Distillation, Flow Matching
TL;DR: We distilled ensemble distribution using flow matching
submission_number: 15,544
arxiv_id: null
embedding: [ -0.007507257163524628, -0.010290231555700302, -0.029158148914575577, 0.05866030976176262, 0.06331286579370499, 0.029908692464232445, 0.007980698719620705, -0.01897868886590004, -0.020797086879611015, -0.036885716021060944, -0.014866532757878304, -0.02221549302339554, -0.06964664906263351, ...
github: https://github.com/Park-Jong-Geon/EDFM

paper_id: 216
title: Deep Ridgelet Transform and Unified Universality Theorem for Deep and Shallow Joint-Group-Equivariant Machines
paper_url: https://openreview.net/forum?id=JKsxKPXXUd
authors: [ "Sho Sonoda", "Yuka Hashimoto", "Isao Ishikawa", "Masahiro Ikeda" ]
type: Poster
abstract: We present a constructive universal approximation theorem for learning machines equipped with joint-group-equivariant feature maps, called the joint-equivariant machines, based on the group representation theory. "Constructive" here indicates that the distribution of parameters is given in a closed-form expression kn...
keywords: ridgelet transform, group representation, harmonic analysis
TL;DR: null
submission_number: 15,537
arxiv_id: 2405.13682
embedding: [ -0.041758760809898376, -0.010775129310786724, 0.03310362622141838, 0.017117731273174286, 0.04082312807440758, 0.04511062055826187, 0.030602358281612396, 0.008046767674386501, -0.028283514082431793, -0.03333983197808266, -0.024490151554346085, -0.0124770887196064, -0.0736970603466034, 0.006...
github: null

paper_id: 217
title: Internal Causal Mechanisms Robustly Predict Language Model Out-of-Distribution Behaviors
paper_url: https://openreview.net/forum?id=Ofa1cspTrv
authors: [ "Jing Huang", "Junyi Tao", "Thomas Icard", "Diyi Yang", "Christopher Potts" ]
type: Poster
abstract: Interpretability research now offers a variety of techniques for identifying abstract internal mechanisms in neural networks. Can such techniques be used to predict how models will behave on out-of-distribution examples? In this work, we provide a positive answer to this question. Through a diverse set of language mode...
keywords: Causal Abstraction, Causal Interpretability, OOD, Correctness Prediction
TL;DR: Internal causal mechanisms robustly predict language model out-of-distribution behaviors.
submission_number: 15,536
arxiv_id: 2505.11770
embedding: [ -0.03610115125775337, 0.0035999626852571964, -0.03202052786946297, 0.038765352219343185, 0.05349305272102356, 0.02966984361410141, 0.04193532466888428, 0.012230025604367256, -0.04063727706670761, -0.013531431555747986, -0.007680057547986507, 0.02962580882012844, -0.017708055675029755, 0.00...
github: https://github.com/explanare/ood-prediction

paper_id: 218
title: Adversarial Robustness via Deformable Convolution with Stochasticity
paper_url: https://openreview.net/forum?id=vISiVCssVg
authors: [ "Yanxiang Ma", "Zixuan Huang", "Minjing Dong", "Shan You", "Chang Xu" ]
type: Poster
abstract: Random defense represents a promising strategy to protect neural networks from adversarial attacks. Most of these methods enhance robustness by injecting randomness into the data, increasing uncertainty for attackers. However, this randomness could reduce the generalization capacity of defense, as defense performance c...
keywords: Adversarial Robustness; Machine Learning
TL;DR: This paper introduces a random structural adversarial defense method called DCS, inspired by DeConv, and an adaptive AT method for DCS to help it reach and exceed SOTA level.
submission_number: 15,534
arxiv_id: null
embedding: [ -0.0021281030494719744, -0.02358277142047882, -0.016034653410315514, 0.049505654722452164, 0.010040785185992718, 0.03695644810795784, 0.038767121732234955, -0.002291410695761442, -0.015278270468115807, -0.06430879980325699, 0.000755361164920032, -0.037427857518196106, -0.03243357315659523, ...
github: https://github.com/theSleepyPig/Deformable_Convolution_with_Stochasticity

paper_id: 219
title: UDora: A Unified Red Teaming Framework against LLM Agents by Dynamically Hijacking Their Own Reasoning
paper_url: https://openreview.net/forum?id=pRmxQHgjb1
authors: [ "Jiawei Zhang", "Shuang Yang", "Bo Li" ]
type: Poster
abstract: Large Language Model (LLM) agents equipped with external tools have become increasingly powerful for complex tasks such as web shopping, automated email replies, and financial trading. However, these advancements amplify the risks of adversarial attacks, especially when agents can access sensitive external functionalit...
keywords: adversarial attack; llm agent; jailbreak; reasoning and acting
TL;DR: In this work, we present UDora, a unified red teaming framework designed for LLM Agents that dynamically leverages the agent's own reasoning processes to compel it toward malicious behavior.
submission_number: 15,515
arxiv_id: 2503.01908
embedding: [ -0.00487239845097065, -0.03622427582740784, -0.030883710831403732, 0.027615897357463837, 0.04357313737273216, 0.014253555797040462, 0.034712620079517365, 0.013977205380797386, -0.019996093586087227, -0.005373722407966852, -0.02393997274339199, 0.04934688284993172, -0.08576692640781403, -0....
github: https://github.com/AI-secure/UDora

paper_id: 220
title: Learning Initial Basis Selection for Linear Programming via Duality-Inspired Tripartite Graph Representation and Comprehensive Supervision
paper_url: https://openreview.net/forum?id=WtD8EIzkmm
authors: [ "Anqi Lu", "Junchi Yan" ]
type: Poster
abstract: For the fundamental linear programming (LP) problems, the simplex method remains popular, which usually requires an appropriate initial basis as a warm start to accelerate the solving process. Predicting an initial basis close to an optimal one can often accelerate the solver, but a closer initial basis does not always...
keywords: supervised learning, graph neural network, linear programming, simplex method, initial basis
TL;DR: null
submission_number: 15,510
arxiv_id: null
embedding: [ -0.03149230405688286, -0.0034419342409819365, -0.009567049331963062, 0.035391226410865784, 0.04148150235414505, 0.0604793019592762, -0.006885899696499109, -0.005682915914803743, -0.02110571414232254, -0.05286811664700508, -0.006739601492881775, -0.00916176289319992, -0.08670935779809952, 0...
github: https://github.com/HAHHHD/TripartiteLP

paper_id: 221
title: PDUDT: Provable Decentralized Unlearning under Dynamic Topologies
paper_url: https://openreview.net/forum?id=K0Vg8b7nyI
authors: [ "Jing Qiao", "Yu Liu", "Zengzhe Chen", "Mingyi Li", "YUAN YUAN", "Xiao Zhang", "Dongxiao Yu" ]
type: Poster
abstract: This paper investigates decentralized unlearning, aiming to eliminate the impact of a specific client on the whole decentralized system. However, decentralized communication characterizations pose new challenges for effective unlearning: the indirect connections make it difficult to trace the specific client's impact, ...
keywords: Decentralized unlearning
TL;DR: null
submission_number: 15,498
arxiv_id: null
embedding: [ -0.019903700798749924, -0.07031936943531036, -0.016884703189134598, 0.07382563501596451, 0.033082015812397, 0.007939278148114681, 0.026380205526947975, 0.019203491508960724, 0.0041456217877566814, -0.02828523889183998, 0.004015755373984575, 0.0036989059299230576, -0.07247740030288696, -0.0...
github: null

paper_id: 222
title: Rethinking Score Distilling Sampling for 3D Editing and Generation
paper_url: https://openreview.net/forum?id=1dZgzGTZEO
authors: [ "Xingyu Miao", "Haoran Duan", "Yang Long", "Jungong Han" ]
type: Poster
abstract: Score Distillation Sampling (SDS) has emerged as a prominent method for text-to-3D generation by leveraging the strengths of 2D diffusion models. However, SDS is limited to generation tasks and lacks the capability to edit existing 3D assets. Conversely, variants of SDS that introduce editing capabilities often can not...
keywords: Score Distilling Sampling; 3D Editing and Generation
TL;DR: null
submission_number: 15,488
arxiv_id: 2505.01888
embedding: [ 0.005635058507323265, -0.0029200834687799215, -0.008164610713720322, 0.08036134392023087, 0.05609980225563049, 0.018791548907756805, 0.029988478869199753, -0.009263935498893261, -0.010288037359714508, -0.0765947476029396, -0.003673180239275098, -0.0209401473402977, -0.052530646324157715, 0...
github: null

paper_id: 223
title: Latent Action Learning Requires Supervision in the Presence of Distractors
paper_url: https://openreview.net/forum?id=2gcEQCT7QW
authors: [ "Alexander Nikulin", "Ilya Zisman", "Denis Tarasov", "Lyubaykin Nikita", "Andrei Polubarov", "Igor Kiselev", "Vladislav Kurenkov" ]
type: Poster
abstract: Recently, latent action learning, pioneered by Latent Action Policies (LAPO), has shown remarkable pre-training efficiency on observation-only data, offering potential for leveraging vast amounts of video available on the web for embodied AI. However, prior work has focused on distractor-free data, where changes betwe...
keywords: latent action learning, imitation learning, learning from observations, learning from videos, latent action model
TL;DR: We empirically investigate the effect of distractors on latent action learning.
submission_number: 15,478
arxiv_id: 2502.00379
embedding: [ 0.024595238268375397, -0.014991912059485912, -0.03594522178173065, 0.03443398326635361, 0.013215107843279839, -0.010092098265886307, 0.023681337013840675, 0.01589823141694069, -0.039827410131692886, -0.01653546281158924, -0.024456394836306572, 0.004061292856931686, -0.05336820334196091, 0....
github: https://github.com/dunnolab/laom

paper_id: 224
title: SafeAuto: Knowledge-Enhanced Safe Autonomous Driving with Multimodal Foundation Models
paper_url: https://openreview.net/forum?id=nKJGjovmZz
authors: [ "Jiawei Zhang", "Xuan Yang", "Taiqi Wang", "Yu Yao", "Aleksandr Petiushko", "Bo Li" ]
type: Poster
abstract: Traditional autonomous driving systems often struggle to connect high-level reasoning with low-level control, leading to suboptimal and sometimes unsafe behaviors. Recent advances in multimodal large language models (MLLMs), which process both visual and textual data, offer an opportunity to unify perception and reason...
keywords: Autonomous Driving; Multimodal Large Language Models; Multimodal Retrieval-Augmented Generation; Probabilistic Graph Model; Markov Logic Network
TL;DR: We propose SafeAuto that includes a specialized PDCE loss for low-level control to improve precision and safety, and enhances high-level action prediction by integrating past driving experiences and precise traffic rules into multimodal models.
submission_number: 15,475
arxiv_id: 2503.00211
embedding: [ 0.005213760305196047, -0.011518342420458794, 0.025401050224900246, 0.05130336061120033, 0.050764743238687515, -0.011060907505452633, 0.05146238952875137, -0.0015825133305042982, -0.016484400257468224, -0.04182390123605728, -0.037442587316036224, 0.03134825453162193, -0.07165228575468063, -...
github: https://github.com/AI-secure/SafeAuto

paper_id: 225
title: Diverse Prototypical Ensembles Improve Robustness to Subpopulation Shift
paper_url: https://openreview.net/forum?id=qUTiOeM57J
authors: [ "Nguyen Nhat Minh To", "Paul F R Wilson", "Viet Nguyen", "Mohamed Harmanani", "Michael Cooper", "Fahimeh Fooladgar", "Purang Abolmaesumi", "Parvin Mousavi", "Rahul Krishnan" ]
type: Poster
abstract: Subpopulation shift, characterized by a disparity in subpopulation distribution between the training and target datasets, can significantly degrade the performance of machine learning models. Current solutions to subpopulation shift involve modifying empirical risk minimization with re-weighting strategies to improve g...
keywords: distribution shift, subpopulation shift, spurious correlation, class imbalance, attribute imbalance
TL;DR: This paper proposes a prototype-based ensemble method to improve robustness against subpopulation shifts without requiring subgroup labels, outperforming state-of-the-art methods across several benchmark datasets.
submission_number: 15,471
arxiv_id: 2505.23027
embedding: [ -0.01809573546051979, -0.06546562910079956, -0.022854439914226532, 0.03464064002037048, 0.05783647298812866, 0.04452626407146454, 0.03185008838772774, -0.038407836109399796, -0.015635982155799866, -0.05510551109910011, -0.02637547068297863, -0.004349595867097378, -0.09153665602207184, -0.0...
github: https://github.com/minhto2802/dpe4subpop

paper_id: 226
title: Privacy-Shielded Image Compression: Defending Against Exploitation from Vision-Language Pretrained Models
paper_url: https://openreview.net/forum?id=olzs3zVsE7
authors: [ "Xuelin Shen", "Jiayin Xu", "Kangsheng Yin", "Wenhan Yang" ]
type: Poster
abstract: The improved semantic understanding of vision-language pretrained (VLP) models has made it increasingly difficult to protect publicly posted images from being exploited by search engines and other similar tools. In this context, this paper seeks to protect users' privacy by implementing defenses at the image compressio...
keywords: Privacy Safeguard, Backdoor Attack, Vision-Language Pretrained Models, Learned Image Compression
TL;DR: null
submission_number: 15,467
arxiv_id: 2506.15201
embedding: [ 0.022884832695126534, -0.005419936031103134, -0.013491347432136536, 0.07258178293704987, 0.05688606575131416, 0.028419140726327896, 0.03386404365301132, -0.02029281109571457, -0.038468990474939346, -0.03621383756399155, -0.05382451415061951, -0.021460041403770447, -0.03701009973883629, -0....
github: https://github.com/JiayinXu5499/PSIC

paper_id: 227
title: HYGMA: Hypergraph Coordination Networks with Dynamic Grouping for Multi-Agent Reinforcement Learning
paper_url: https://openreview.net/forum?id=mgJkeqc685
authors: [ "Chiqiang Liu", "Dazi Li" ]
type: Poster
abstract: Cooperative multi-agent reinforcement learning faces significant challenges in effectively organizing agent relationships and facilitating information exchange, particularly when agents need to adapt their coordination patterns dynamically. This paper presents a novel framework that integrates dynamic spectral clusteri...
keywords: Multi-Agent Reinforcement Learning, Hypergraph Convolution, Dynamic Grouping, Multi-Agent Cooperation
TL;DR: HYGMA integrates dynamic spectral clustering with hypergraph neural networks to enable adaptive agent grouping and efficient information processing in multi-agent reinforcement learning, outperforming state-of-the-art approaches in cooperative tasks.
submission_number: 15,465
arxiv_id: 2505.07207
embedding: [ -0.0037088491953909397, 0.003584406804293394, 0.019160190597176552, 0.05862197279930115, 0.016468586400151253, -0.00016385276103392243, 0.018182100728154182, -0.005229447036981583, -0.030833782628178596, -0.051139429211616516, -0.014407861977815628, 0.008009488694369793, -0.08855058997869492...
github: https://github.com/mysteryelder/HYGMA

paper_id: 228
title: How Distributed Collaboration Influences the Diffusion Model Training? A Theoretical Perspective
paper_url: https://openreview.net/forum?id=dzGtPrqORu
authors: [ "Jing Qiao", "Yu Liu", "YUAN YUAN", "Xiao Zhang", "Zhipeng Cai", "Dongxiao Yu" ]
type: Poster
abstract: This paper examines the theoretical performance of distributed diffusion models in environments where computational resources and data availability vary significantly among workers. Traditional models centered on single-worker scenarios fall short in such distributed settings, particularly when some workers are resourc...
keywords: distributed diffusion models, generation error bound
TL;DR: null
submission_number: 15,445
arxiv_id: null
embedding: [ -0.014420753344893456, -0.017693623900413513, -0.0169332567602396, 0.07899497449398041, 0.06300145387649536, 0.027810631319880486, 0.033914025872945786, -0.021596591919660568, -0.01033980492502451, -0.0449928343296051, 0.004583635833114386, -0.019131610170006752, -0.054181184619665146, 0.0...
github: null

paper_id: 229
title: A Near Linear Query Lower Bound for Submodular Maximization
paper_url: https://openreview.net/forum?id=LCFPWXymVt
authors: [ "Binghui Peng", "Aviad Rubinstein" ]
type: Poster
abstract: We revisit the problem of selecting $k$-out-of-$n$ elements with the goal of optimizing an objective function, and ask whether it can be solved approximately with sublinear query complexity. For objective functions that are monotone submodular, [Li, Feldman, Kazemi, Karbasi, NeurIPS'22; Kuhnle, AISTATS'21] gave an $\Om...
keywords: Submodular maximization, sublinear algorithm
TL;DR: null
submission_number: 15,435
arxiv_id: null
embedding: [ -0.03744608163833618, -0.015546050854027271, 0.004941099788993597, 0.02511967346072197, 0.05911744013428688, 0.026402516290545464, 0.02007931098341942, -0.01546595897525549, -0.016998732462525368, -0.031082211062312126, -0.016284655779600143, -0.0060112145729362965, -0.06934171169996262, -...
github: null

paper_id: 230
title: Local Pan-privacy for Federated Analytics
paper_url: https://openreview.net/forum?id=M18dhHTFf8
authors: [ "Vitaly Feldman", "Audra McMillan", "Guy N. Rothblum", "Kunal Talwar" ]
type: Poster
abstract: Pan-privacy was proposed by Dwork et al. (2010) as an approach to designing a private analytics system that retains its privacy properties in the face of intrusions that expose the system's internal state. Motivated by Federated telemetry applications, we study {\em local pan-privacy}, where privacy should be retained ...
keywords: Differential Privacy, Pan privacy
TL;DR: Maintaining differential privacy under intrusion on the local state.
submission_number: 15,432
arxiv_id: 2503.11850
embedding: [ 0.0018895798129960895, -0.01078109536319971, -0.00513251218944788, 0.046938929706811905, 0.053909026086330414, 0.006037652492523193, 0.028752533718943596, -0.019468750804662704, -0.023981940001249313, -0.023284384980797768, 0.016812968999147415, -0.024724023416638374, -0.04344760626554489, ...
github: null

231
Curvature Enhanced Data Augmentation for Regression
https://openreview.net/forum?id=l1sx5KiM7Z
[ "Ilya Kaufman", "Omri Azencot" ]
Poster
Deep learning models with a large number of parameters, often referred to as over-parameterized models, have achieved exceptional performance across various tasks. Despite concerns about overfitting, these models frequently generalize well to unseen data, thanks to effective regularization techniques, with data augment...
Manifold learning, Data augmentation, Regression
null
15,429
2506.06853
[ 0.0037505291402339935, -0.0357835628092289, -0.012320016510784626, 0.04692582041025162, 0.038510408252477646, 0.06323835998773575, 0.022522859275341034, -0.01331841666251421, -0.02323705516755581, -0.07089360058307648, -0.023693256080150604, 0.006957434117794037, -0.0801725909113884, 0.045...
https://github.com/azencot-group/CEMS
232
AdvAgent: Controllable Blackbox Red-teaming on Web Agents
https://openreview.net/forum?id=bwidSkOyWF
[ "Chejian Xu", "Mintong Kang", "Jiawei Zhang", "Zeyi Liao", "Lingbo Mo", "Mengqi Yuan", "Huan Sun", "Bo Li" ]
Poster
Foundation model-based agents are increasingly used to automate complex tasks, enhancing efficiency and productivity. However, their access to sensitive resources and autonomous decision-making also introduce significant security risks, where successful attacks could lead to severe consequences. To systematically uncov...
Web Agents, Large Language Models, Prompt Injection Attack
We introduce AdvAgent, a black-box red-teaming framework targeting generalist web agents, leveraging the feedback from target agents to generate adaptive adversarial prompts.
15,419
2410.17401
[ -0.004057995043694973, -0.04316529631614685, -0.0170346200466156, 0.05633145198225975, 0.014624637551605701, 0.012380569241940975, 0.04370157793164253, -0.013315709307789803, -0.016070948913693428, -0.04670584574341774, -0.018159523606300354, 0.013202286325395107, -0.08838412165641785, -0....
null
233
The Surprising Effectiveness of Test-Time Training for Few-Shot Learning
https://openreview.net/forum?id=asgBo3FNdg
[ "Ekin Akyürek", "Mehul Damani", "Adam Zweiger", "Linlu Qiu", "Han Guo", "Jyothish Pari", "Yoon Kim", "Jacob Andreas" ]
Poster
Language models (LMs) have shown impressive performance on tasks within their training distribution, but often struggle with structurally novel tasks even when given a small number of in-context task examples. We investigate the effectiveness of test-time training (TTT)—temporarily updating model parameters during infe...
Test-Time Training, In-Context Learning, Few-Shot Learning, ARC, BIG-Bench Hard
Test-time training on in-context examples outperforms standard few-shot learning, boosting LM abstract reasoning and adaptability.
15,418
2411.07279
[ 0.0048421756364405155, -0.016723711043596268, -0.03152741119265556, 0.05397941172122955, 0.03859638422727585, 0.03414597734808922, 0.055504683405160904, 0.03875969722867012, -0.008342798799276352, -0.00026521997642703354, -0.01807994209229946, 0.05512349307537079, -0.04553937539458275, -0....
https://github.com/ekinakyurek
234
On the Local Complexity of Linear Regions in Deep ReLU Networks
https://openreview.net/forum?id=id2CfAgEAk
[ "Niket Nikul Patel", "Guido Montufar" ]
Poster
We define the *local complexity* of a neural network with continuous piecewise linear activations as a measure of the density of linear regions over an input data distribution. We show theoretically that ReLU networks that learn low-dimensional feature representations have a lower local complexity. This allows us to co...
Linear Regions, Adversarial Robustness, Implicit Bias, Representation Learning
null
15,410
2412.18283
[ -0.024319861084222794, -0.009944414719939232, 0.01449013501405716, 0.052726443856954575, 0.03818432241678238, 0.05883820354938507, 0.027857139706611633, -0.02001851238310337, -0.02455933205783367, -0.023864299058914185, -0.009190228767693043, 0.00408957852050662, -0.06026483699679375, 0.01...
null
235
TOPLOC: A Locality Sensitive Hashing Scheme for Trustless Verifiable Inference
https://openreview.net/forum?id=8PJmKfeDdp
[ "Jack Min Ong", "Matthew Di Ferrante", "Aaron Pazdera", "Ryan Garner", "Sami Jaghouar", "Manveer Basra", "Max Ryabinin", "Johannes Hagemann" ]
Poster
Large language models (LLMs) have proven to be very capable, but access to frontier models currently relies on inference providers. This introduces trust challenges: how can we be sure that the provider is using the model configuration they claim? We propose TOPLOC, a novel method for verifiable inference that addresse...
Locality Sensitive Hashing, Large Language Models, Verifiable Computing
The paper describes a method to verify LLM inference performed by untrusted compute providers
15,401
2501.16007
[ -0.02694658748805523, -0.003820720361545682, -0.029943905770778656, 0.08381243050098419, 0.043417785316705704, 0.009019085206091404, 0.018754348158836365, 0.013711702078580856, -0.014232954941689968, -0.007278821896761656, -0.011875320225954056, -0.005622358527034521, -0.06327517330646515, ...
https://github.com/PrimeIntellect-ai/toploc
236
Exploring Invariance in Images through One-way Wave Equations
https://openreview.net/forum?id=HdogAuhlD5
[ "Yinpeng Chen", "Dongdong Chen", "Xiyang Dai", "Mengchen Liu", "Yinan Feng", "Youzuo Lin", "Lu Yuan", "Zicheng Liu" ]
Poster
In this paper, we empirically demonstrate that natural images can be reconstructed with high fidelity from compressed representations using a simple first-order norm-plus-linear autoregressive (FINOLA) process—without relying on explicit positional information. Through systematic analysis, we observe that the learned c...
Mathematical invariance in images, One-way wave equation, Auto-regression
null
15,400
2310.12976
[ 0.00894407369196415, -0.028622575104236603, 0.009685324504971504, 0.012441477738320827, 0.061379119753837585, -0.002329236827790737, 0.017986783757805824, 0.014281987212598324, -0.04655911400914192, -0.06784284859895706, -0.014952718280255795, -0.00724800443276763, -0.05163256824016571, -0...
null
237
Unifying Knowledge from Diverse Datasets to Enhance Spatial-Temporal Modeling: A Granularity-Adaptive Geographical Embedding Approach
https://openreview.net/forum?id=uPVynwZxch
[ "Zhigaoyuan Wang", "Ying Sun", "Hengshu Zhu" ]
Poster
Spatio-temporal forecasting provides potential for discovering evolutionary patterns in geographical scientific data. However, geographical scientific datasets are often manually collected across studies, resulting in limited time spans and data scales. This hinders existing methods that rely on rich historical data fo...
Geographical Modeling
null
15,394
null
[ 0.01906815730035305, -0.06558549404144287, 0.017206352204084396, 0.021815724670886993, 0.027439378201961517, 0.03331165388226509, 0.04245388135313988, 0.009215950034558773, -0.03137598931789398, -0.04410576447844505, -0.015443806536495686, -0.04009437933564186, -0.07609732449054718, 0.0247...
null
238
DiLQR: Differentiable Iterative Linear Quadratic Regulator via Implicit Differentiation
https://openreview.net/forum?id=m2EfTrbv4o
[ "Shuyuan Wang", "Philip D Loewen", "Michael Forbes", "Bhushan Gopaluni", "Wei Pan" ]
Poster
While differentiable control has emerged as a powerful paradigm combining model-free flexibility with model-based efficiency, the iterative Linear Quadratic Regulator (iLQR) remains underexplored as a differentiable component. The scalability of differentiating through extended iterations and horizons poses significant...
iLQR; Differentiable control; Learning based control
null
15,377
2506.17473
[ -0.02925415337085724, -0.031033847481012344, -0.009942539967596531, 0.02321598492562771, 0.025658298283815384, 0.02638191729784012, 0.0026971225161105394, -0.019319886341691017, -0.03535069152712822, -0.012627081014215946, -0.007270140573382378, -0.0018301005475223064, -0.06425082683563232, ...
null
239
The Disparate Benefits of Deep Ensembles
https://openreview.net/forum?id=tjPxZiqeHB
[ "Kajetan Schweighofer", "Adrian Arnaiz-Rodriguez", "Sepp Hochreiter", "Nuria M Oliver" ]
Poster
Ensembles of Deep Neural Networks, Deep Ensembles, are widely used as a simple way to boost predictive performance. However, their impact on algorithmic fairness is not well understood yet. Algorithmic fairness examines how a model's performance varies across socially relevant groups defined by protected attributes suc...
Deep Ensembles, Disparate Benefits, Predictive Diversity, Algorithmic Fairness, Post-Processing
We uncover the disparate benefits effect of Deep Ensembles, analyze its cause and evaluate approaches to mitigate its negative fairness implications.
15,371
2410.13831
[ 0.00018132787954527885, -0.02639247290790081, -0.03601256385445595, 0.05086002126336098, 0.026006193831562996, 0.008289736695587635, 0.03476596623659134, 0.010591892525553703, -0.021767927333712578, -0.04715650528669357, -0.0016378109576180577, 0.0027069798670709133, -0.0770428478717804, -...
https://github.com/ml-jku/disparate-benefits
240
Power Mean Estimation in Stochastic Continuous Monte-Carlo Tree Search
https://openreview.net/forum?id=LL8R2QUEvB
[ "Tuan Quang Dam" ]
Poster
Monte-Carlo Tree Search (MCTS) has demonstrated success in online planning for deterministic environments, yet significant challenges remain in adapting it to stochastic Markov Decision Processes (MDPs), particularly in continuous state-action spaces. Existing methods, such as HOOT, which combines MCTS with the Hierarc...
Monte-Carlo Tree Search; Continuous Reinforcement Learning Planning
We develop a theoretically-guaranteed planning algorithm that enables making optimal continuous decisions in unpredictable environments, outperforming existing methods on robotic control tasks.
15,370
null
[ -0.045309219509363174, 0.011611401103436947, -0.00859194528311491, 0.06207592412829399, 0.03929702565073967, 0.018930451944470406, 0.043383289128541946, 0.01774333231151104, -0.0292503722012043, -0.04148048907518387, 0.006753379479050636, -0.011291849426925182, -0.053840044885873795, -0.05...
null
241
Provable In-Context Vector Arithmetic via Retrieving Task Concepts
https://openreview.net/forum?id=DbUmeNnNpt
[ "Dake Bu", "Wei Huang", "Andi Han", "Atsushi Nitanda", "Qingfu Zhang", "Hau-San Wong", "Taiji Suzuki" ]
Poster
In-context learning (ICL) has garnered significant attention for its ability to grasp functions/tasks from demonstrations. Recent studies suggest the presence of a latent **task/function vector** in LLMs during ICL. Merullo et al. (2024) showed that LLMs leverage this vector alongside the residual stream for Word2Vec-l...
Task Vector; Residual Transformer; Factual-recall In-Context Learning
null
15,363
null
[ -0.0046827225014567375, -0.005974166095256805, 0.005031627602875233, 0.03147200122475624, 0.027638224884867668, -0.006756161339581013, 0.024400100111961365, 0.033192235976457596, -0.028640417382121086, 0.01909536123275757, -0.010475462302565575, 0.018503472208976746, -0.022334590554237366, ...
null
242
Contextual Bandits for Unbounded Context Distributions
https://openreview.net/forum?id=gGY9TNVYs3
[ "Puning Zhao", "Rongfei Fan", "Shaowei Wang", "Li Shen", "Qixin Zhang", "ZongKe", "Tianhang Zheng" ]
Poster
Nonparametric contextual bandit is an important model of sequential decision making problems. Under $\alpha$-Tsybakov margin condition, existing research has established a regret bound of $\tilde{O}\left(T^{1-\frac{\alpha+1}{d+2}}\right)$ for bounded supports. However, the optimal regret with unbounded contexts has not...
Contextual bandits, nonparametric statistics
We analyze nonparametric contextual bandits under unbounded context distributions.
15,332
2408.09655
[ -0.02833704650402069, 0.008174339309334755, -0.005018602125346661, 0.027240149676799774, 0.04782680794596672, 0.022488946095108986, 0.012690352275967598, 0.017332550138235092, -0.02215956524014473, -0.05115613713860512, -0.02908807434141636, 0.01879587210714817, -0.06074992194771767, -0.02...
null
243
PiD: Generalized AI-Generated Images Detection with Pixelwise Decomposition Residuals
https://openreview.net/forum?id=gye2zYytx6
[ "Xinghe Fu", "Zhiyuan Yan", "Zheng Yang", "Taiping Yao", "Yandan Zhao", "Shouhong Ding", "Xi Li" ]
Poster
Fake images, created by recently advanced generative models, have become increasingly indistinguishable from real ones, making their detection crucial, urgent, and challenging. This paper introduces PiD (Pixelwise Decomposition Residuals), a novel detection method that focuses on residual signals within images. Generat...
AIGC detection, Generative models, Deepfakes, Deep learning, Image representation, Low-level vision
A generalizable AI-generated image detection method with pixelwise decomposition residuals is proposed.
15,311
null
[ 0.003968869335949421, -0.010852504521608353, -0.013845076784491539, 0.06593881547451019, 0.044047437608242035, 0.019579973071813583, 0.006883047521114349, 0.010325144976377487, -0.03319242224097252, -0.03752819821238518, -0.013760755769908428, -0.017408283427357674, -0.0629124566912651, 0....
null
244
CAT: Contrastive Adversarial Training for Evaluating the Robustness of Protective Perturbations in Latent Diffusion Models
https://openreview.net/forum?id=5of0l7eUau
[ "Sen Peng", "Mingyue Wang", "Jianfei He", "Jijia Yang", "Xiaohua Jia" ]
Poster
Latent diffusion models have recently demonstrated superior capabilities in many downstream image synthesis tasks. However, customization of latent diffusion models using unauthorized data can severely compromise the privacy and intellectual property rights of data owners. Adversarial examples as protective perturbati...
Latent Diffusion Models, Protective Perturbations, Adversarial Training
We reveal the role of latent representation distortion in protective perturbations and propose Contrastive Adversarial Training, an adaptive attack that exposes their robustness weaknesses.
15,307
2502.07225
[ 0.011588610708713531, -0.05817989259958267, -0.008471003733575344, 0.05671507120132446, 0.038728222250938416, 0.010787340812385082, 0.0368683859705925, -0.011472752317786217, -0.025548817589879036, -0.05154909938573837, -0.017306966707110405, -0.031110256910324097, -0.03321215137839317, 0....
https://github.com/senp98/CAT
245
Online Robust Reinforcement Learning Through Monte-Carlo Planning
https://openreview.net/forum?id=m25ma7O7Ec
[ "Tuan Quang Dam", "Kishan Panaganti", "Brahim Driss", "Adam Wierman" ]
Poster
Monte Carlo Tree Search (MCTS) is a powerful framework for solving complex decision-making problems, yet it often relies on the assumption that the simulator and the real-world dynamics are identical. Although this assumption helps achieve the success of MCTS in games like Chess, Go, and Shogi, the real-world scenarios...
Monte-carlo tree search, distributionally robust reinforcement learning, online reinforcement learning
We provide a Monte-Carlo tree search based planning algorithm for the online robust RL problem.
15,297
null
[ -0.04715589061379433, -0.015029387548565865, 0.005129767581820488, 0.05790961906313896, 0.035590481013059616, 0.021704502403736115, 0.005689017008990049, -0.007997495122253895, -0.014106960967183113, -0.04772832989692688, -0.019486771896481514, -0.0022036321461200714, -0.04281529411673546, ...
https://github.com/brahimdriss/RobustMCTS
246
Learning Condensed Graph via Differentiable Atom Mapping for Reaction Yield Prediction
https://openreview.net/forum?id=sqjQ6p56GR
[ "Ankit Ghosh", "Gargee Kashyap", "Sarthak Mittal", "Nupur Jain", "Raghavan B Sunoj", "Abir De" ]
Poster
Yield of chemical reactions generally depends on the activation barrier, i.e., the energy difference between the reactant and the transition state. Computing the transition state from the reactant and product graphs requires prior knowledge of the correct node alignment (i.e., atom mapping), which is not available in y...
Yield Prediction, GNN, Atom Mapping
Learns yield by approximating atom mapping and then a condensed graph of reaction (CGR), where CGR serves as a surrogate of the transition state.
15,291
null
[ -0.015712976455688477, 0.015264744870364666, -0.009977281093597412, 0.03495774045586586, 0.044121161103248596, 0.004845844116061926, 0.02112073078751564, -0.0010688209440559149, 0.013011074624955654, -0.023473892360925674, 0.04162881523370743, -0.01031038910150528, -0.02567058615386486, 0....
null
247
What Limits Bidirectional Model's Generative Capabilities? A Uni-Bi-Directional Mixture-of-Expert Method For Bidirectional Fine-tuning
https://openreview.net/forum?id=kPqvx2mvec
[ "Zuchao Li", "Yonghua Hei", "Qiwei Li", "Lefei Zhang", "Ping Wang", "Hai Zhao", "Baoyuan Qi", "Liu Guoming" ]
Poster
Large Language Models (LLMs) excel in generation tasks, yet their causal attention mechanisms limit performance in embedding tasks. While bidirectional modeling may enhance embeddings, naively fine-tuning unidirectional models bidirectionally severely degrades generative performance. To investigate this trade-off, we a...
Large Language Model, Bidirectional Modeling, Causal Attention
We demonstrate that bidirectional training leads to an increase in subsequent dependency, and propose A Uni-Bi-Directional Mixture-of-Expert Large Language Model.
15,290
null
[ -0.01680753566324711, -0.03712350130081177, 0.014558413065969944, -0.02239079773426056, 0.031781937927007675, -0.006012467201799154, 0.040063269436359406, 0.022608252242207527, -0.02282746322453022, -0.017281366512179375, -0.0019646992441266775, 0.024889761582016945, -0.046230074018239975, ...
null
248
The Noisy Laplacian: a Threshold Phenomenon for Non-Linear Dimension Reduction
https://openreview.net/forum?id=GK6q2SFNHm
[ "Alex Kokot", "Octavian-Vlad Murad", "Marina Meila" ]
Poster
In this paper, we clarify the effect of noise on common spectrally motivated algorithms such as Diffusion Maps (DM) for dimension reduction. Empirically, these methods are much more robust to noise than current work suggests. Specifically, existing consistency results require that either the noise amplitude or dimensio...
Dimension Reduction, Denoising, Diffusion Maps, Laplacian, VAE, LTSA
We show that manifold spectral data can be recovered from noisy samples up to a noise dependent threshold, and investigate the performance of other unsupervised learning algorithms in this setting.
15,271
null
[ -0.03417928144335747, -0.00677854660898447, 0.03436654433608055, 0.022498127073049545, 0.011056466959416866, 0.055193815380334854, 0.0463578924536705, -0.002640977967530489, -0.03316017612814903, -0.08729260414838791, -0.0022017185110598803, -0.021910127252340317, -0.06722469627857208, 0.0...
null
249
GTR: A General, Multi-View, and Dynamic Framework for Trajectory Representation Learning
https://openreview.net/forum?id=ehcWKZ2nEn
[ "Xiangheng Wang", "Ziquan Fang", "Chenglong Huang", "Danlei Hu", "Lu Chen", "Yunjun Gao" ]
Poster
Trajectory representation learning aims to transform raw trajectory data into compact and low-dimensional vectors that are suitable for downstream analysis. However, most existing methods adopt either a free-space view or a road-network view during the learning process, which limits their ability to capture the complex...
Trajectory representation learning, mobility learning, spatio-temporal learning
null
15,229
null
[ 0.009459881111979485, -0.030777977779507637, 0.0011894844938069582, 0.04611543193459511, 0.013088696636259556, 0.033424362540245056, 0.029689423739910126, 0.03513514623045921, -0.018826795741915703, -0.028254283592104912, -0.010012038983404636, -0.008849548175930977, -0.07710908353328705, ...
https://github.com/ZJU-DAILY/GTR
250
µnit Scaling: Simple and Scalable FP8 LLM Training
https://openreview.net/forum?id=qOLjAhxZgm
[ "Saaketh Narayan", "Abhay Gupta", "Mansheej Paul", "Davis Blalock" ]
Poster
Large language model training with 8-bit floating point (FP8) formats promises significant efficiency improvements, but reduced numerical precision makes training challenging. It is currently possible to train in FP8 only if one is willing to tune various hyperparameters, reduce model scale, or accept the overhead of c...
LLM, FP8, Transformer, Model Training, Attention
μnit Scaling combines stable, highly efficient training via FP8 with significant cost savings through hyperparameter transfer.
15,192
null
[ -0.041980937123298645, -0.039302337914705276, 0.0033224523067474365, 0.040161795914173126, 0.038896575570106506, 0.05799613893032074, 0.01728782430291176, 0.0003747877199202776, -0.036163777112960815, -0.003942061681300402, 0.029993539676070213, 0.0006530038081109524, -0.06087905541062355, ...
null
251
Contextual Optimization Under Model Misspecification: A Tractable and Generalizable Approach
https://openreview.net/forum?id=e3NNvqD7wA
[ "Omar Bennouna", "Jiawei Zhang", "Saurabh Amin", "Asuman E. Ozdaglar" ]
Poster
Contextual optimization problems are prevalent in decision-making applications where historical data and contextual features are used to learn predictive models that inform optimal actions. However, practical applications often suffer from model misspecification due to incomplete knowledge of the underlying data-genera...
Contextual Optimization, Machine Learning, Constrained Optimization
null
15,188
null
[ -0.03939438611268997, 0.0036185416392982006, 0.0027236121241003275, 0.0437781997025013, 0.03977510705590248, 0.05454518646001816, 0.01087665930390358, -0.00905320793390274, -0.030723776668310165, -0.02932407706975937, -0.01900520920753479, 0.028949350118637085, -0.055945370346307755, -0.01...
null
252
Linear Transformers as VAR Models: Aligning Autoregressive Attention Mechanisms with Autoregressive Forecasting
https://openreview.net/forum?id=SxJUV9mnyt
[ "Jiecheng Lu", "Shihao Yang" ]
Poster
Autoregressive attention-based time series forecasting (TSF) has drawn increasing interest, with mechanisms like linear attention often outperforming vanilla attention. However, deeper Transformer architectures frequently misalign with autoregressive objectives, obscuring the underlying VAR structure embedded within li...
Linear Attention, Transformer, Time Series Forecasting, Vector Autoregression
null
15,182
2502.07244
[ 0.016864804551005363, -0.017051301896572113, 0.013551748357713223, 0.017473334446549416, 0.014250029809772968, 0.060590993613004684, 0.05122441053390503, 0.019746901467442513, -0.046866271644830704, -0.04697643965482712, -0.0024512920062988997, 0.005393642466515303, -0.08562891185283661, 0...
https://github.com/LJC-FVNR/Structural-Aligned-Mixture-of-VAR
253
NextCoder: Robust Adaptation of Code LMs to Diverse Code Edits
https://openreview.net/forum?id=3B6fF1PxYD
[ "Tushar Aggarwal", "Swayam Singh", "Abhijeet Awasthi", "Aditya Kanade", "Nagarajan Natarajan" ]
Poster
Software engineering activities frequently involve edits to existing code. However, contemporary code language models (LMs) lack the ability to handle diverse types of code-edit requirements. In this work, we attempt to overcome this shortcoming through (1) a novel synthetic data generation pipeline and (2) a robust mo...
Code-LMs, code-editing, code-generation, software engineering
This paper introduce a synthetic data generation pipeline and a robust model adaptation algorithm to train models for diverse code-editing tasks without losing their original code generation abilities.
15,174
null
[ 0.013036087155342102, -0.05108296498656273, -0.03096916526556015, 0.043774720281362534, 0.053639404475688934, 0.06130753830075264, 0.02090025134384632, 0.018657898530364037, -0.02523958496749401, -0.041270799934864044, -0.033804751932621, 0.006159566808491945, -0.07053753733634949, -0.0177...
null
254
O-MAPL: Offline Multi-agent Preference Learning
https://openreview.net/forum?id=FYvrNKYu6H
[ "The Viet Bui", "Tien Anh Mai", "Thanh Hong Nguyen" ]
Poster
Inferring reward functions from demonstrations is a key challenge in reinforcement learning (RL), particularly in multi-agent RL (MARL). The large joint state-action spaces and intricate inter-agent interactions in MARL make inferring the joint reward function especially challenging. While prior studies in single-agent...
Multi-agent Reinforcement Learning, Preference Learning
Offline Multi-agent Preference Learning
15,169
null
[ -0.03964730724692345, -0.018264278769493103, 0.012882508337497711, 0.02351306565105915, 0.039268068969249725, 0.01414145901799202, -0.00040311916382052004, 0.020084315910935402, -0.03239241987466812, -0.024039218202233315, -0.028713900595903397, 0.06468649208545685, -0.09737510234117508, -...
null
255
RoSTE: An Efficient Quantization-Aware Supervised Fine-Tuning Approach for Large Language Models
https://openreview.net/forum?id=h30EzoI3s0
[ "Quan Wei", "Chung-Yiu Yau", "Hoi To Wai", "Yang Zhao", "Dongyeop Kang", "Youngsuk Park", "Mingyi Hong" ]
Poster
Supervised fine-tuning is a standard method for adapting pre-trained large language models (LLMs) to downstream tasks. Quantization has been recently studied as a post-training technique for efficient LLM deployment. To obtain quantized fine-tuned LLMs, conventional pipelines would first fine-tune the pre-trained model...
model quantization, quantization-aware training, fine-tuning, large language models
null
15,164
2502.09003
[ -0.028395378962159157, -0.02996387518942356, -0.016977611929178238, 0.04532083496451378, 0.0453043133020401, 0.030075490474700928, 0.01239741686731577, 0.021752983331680298, -0.00394711596891284, 0.014299081638455391, -0.031023476272821426, 0.03399936854839325, -0.0706244707107544, -0.0199...
https://github.com/OptimAI-Lab/RoSTE
256
Reflection-Bench: Evaluating Epistemic Agency in Large Language Models
https://openreview.net/forum?id=eff38SdyvN
[ "Lingyu Li", "Yixu Wang", "Haiquan Zhao", "Shuqi Kong", "Yan Teng", "Chunbo Li", "Yingchun Wang" ]
Poster
With large language models (LLMs) increasingly deployed as cognitive engines for AI agents, the reliability and effectiveness critically hinge on their intrinsic epistemic agency, which remains understudied. Epistemic agency, the ability to flexibly construct, adapt, and monitor beliefs about dynamic environments, repr...
large language models, autonomous agent, cognitive psychology
Reflection-Bench is a cognitive psychology inspired, parameterized, and contamination-minimization benchmark that evaluates LLMs' intrinsic epistemic agency, the capacity to flexibly construct, adapt, and monitor beliefs about dynamic environments.
15,131
null
[ -0.00047057229676283896, 0.010184397920966148, -0.022348439320921898, 0.023166023194789886, 0.015380965545773506, 0.01670852303504944, 0.04781730845570564, 0.01709434576332569, -0.03151136264204979, -0.0004590447642840445, -0.011170191690325737, 0.033053215593099594, -0.048034027218818665, ...
https://github.com/AI45Lab/ReflectionBench
257
COGNATE: Acceleration of Sparse Tensor Programs on Emerging Hardware using Transfer Learning
https://openreview.net/forum?id=EV0itGFjmm
[ "Chamika Sudusinghe", "Gerasimos Gerogiannis", "Damitha Lenadora", "Charles Block", "Josep Torrellas", "Charith Mendis" ]
Poster
Sparse tensor programs are essential in deep learning and graph analytics, driving the need for optimized processing. To meet this demand, specialized hardware accelerators are being developed. Optimizing these programs for accelerators is challenging for two reasons: program performance is highly sensitive to variatio...
machine learning for systems, transfer learning, hardware accelerators, learned cost models, sparse tensor programs
A novel framework that leverages the homogeneity of input features across hardware platforms while mitigating heterogeneity to efficiently fine-tune learned cost models for accelerating sparse tensor programs on emerging hardware platforms.
15,114
2506.00424
[ -0.00468054274097085, -0.03130849450826645, -0.0287033598870039, 0.028118900954723358, 0.030592964962124825, 0.030420739203691483, 0.002769812475889921, 0.014495879411697388, -0.0156883392482996, -0.04540690779685974, 0.03293432295322418, -0.005033391527831554, -0.061633601784706116, 0.019...
null
258
Robot-Gated Interactive Imitation Learning with Adaptive Intervention Mechanism
https://openreview.net/forum?id=TC1sQg5z0T
[ "Haoyuan Cai", "Zhenghao Peng", "Bolei Zhou" ]
Poster
Interactive Imitation Learning (IIL) allows agents to acquire desired behaviors through human interventions, but current methods impose high cognitive demands on human supervisors. We propose the Adaptive Intervention Mechanism (AIM), a novel robot-gated IIL algorithm that learns an adaptive criterion for requesting hu...
Imitation Learning, Human-in-the-loop Reinforcement Learning, Shared Autonomy
null
15,110
2506.09176
[ -0.0010817734291777015, -0.0288461372256279, -0.007798182312399149, -0.028240302577614784, 0.0527288019657135, 0.011844788677990437, 0.03143464773893356, -0.00344292726367712, -0.043154481798410416, -0.020122602581977844, -0.050186023116111755, 0.023260923102498055, -0.06819558143615723, 0...
https://github.com/metadriverse/AIM
259
A Reasoning-Based Approach to Cryptic Crossword Clue Solving
https://openreview.net/forum?id=kBTgizDiCq
[ "Martin Andrews", "Sam Witteveen" ]
Poster
Cryptic crossword clues are challenging language tasks for which new test sets are released daily by major newspapers on a global basis. Each cryptic clue contains both the definition of the answer to be placed in the crossword grid (in common with regular crosswords), and ‘wordplay’ that *proves* that the answer is co...
NLP, Cryptic Crosswords, Reasoning, Proof/Verification
Our LLM prover/verifier for the reasoning steps in cryptic crossword clues obtains SoTA results
15,105
2506.04824
[ -0.01279480941593647, -0.003044323530048132, -0.014771473594009876, 0.025046277791261673, 0.0861818939447403, 0.02231503278017044, 0.025259939953684807, 0.0074581727385520935, -0.03707990422844887, -0.010427451692521572, -0.02086896263062954, 0.04457377642393112, -0.023435352370142937, 0.0...
https://github.com/mdda/cryptic-crossword-reasoning-verifier
260
Maximum Update Parametrization and Zero-Shot Hyperparameter Transfer for Fourier Neural Operators
https://openreview.net/forum?id=fHt4Nau7FW
[ "Shanda Li", "Shinjae Yoo", "Yiming Yang" ]
Poster
Fourier Neural Operators (FNOs) offer a principled approach for solving complex partial differential equations (PDEs). However, scaling them to handle more complex PDEs requires increasing the number of Fourier modes, which significantly expands the number of model parameters and makes hyperparameter tuning computation...
Fourier Neural Operators, Hyperparameter transfer
We introduce $\mu$Transfer-FNO, a zero-shot hyperparameter transfer method for Fourier Neural Operators that enables efficient hyperparameter tuning by leveraging a novel Maximum Update Parametrization.
15,097
null
[ -0.07230935245752335, -0.013644261285662651, 0.02920815348625183, 0.023982129991054535, 0.03192984685301781, 0.05122652277350426, 0.010983536951243877, -0.004893885459750891, -0.016828930005431175, -0.05113401263952255, 0.0032524196431040764, 0.0009498599683865905, -0.05029721558094025, 0....
https://github.com/LithiumDA/muTransfer-FNO
261
Semantics-aware Test-time Adaptation for 3D Human Pose Estimation
https://openreview.net/forum?id=pNZ3pioKRN
[ "Qiuxia Lin", "Rongyu Chen", "Kerui Gu", "Angela Yao" ]
Poster
This work highlights a semantics misalignment in 3D human pose estimation. For the task of test-time adaptation, the misalignment manifests as overly smoothed and unguided predictions. The smoothing settles predictions towards some average pose. Furthermore, when there are occlusions or truncations, the adaptation beco...
Test-time Adaptation, 3D Human Pose Estimation
null
15,096
2502.10724
[ 0.028844643384218216, -0.007207074668258429, 0.0015882847364991903, 0.03591127321124077, 0.04461640492081642, 0.04058923199772835, 0.03969884291291237, 0.018561769276857376, -0.04238273948431015, -0.02077908255159855, -0.01813679002225399, -0.01768539659678936, -0.08168381452560425, -0.029...
null
262
Revisiting Cooperative Off-Policy Multi-Agent Reinforcement Learning
https://openreview.net/forum?id=JPkJAyutW0
[ "Yueheng Li", "Guangming Xie", "Zongqing Lu" ]
Poster
Cooperative Multi-Agent Reinforcement Learning (MARL) has become a critical tool for addressing complex real-world problems. However, off-policy MARL methods, which rely on joint Q-functions, face significant scalability challenges due to the exponentially growing joint action space. In this work, we highlight a criti...
cooperative multi-agent reinforcement learning, off-policy multi-agent reinforcement learning, value factorization, extrapolation error
null
15,095
null
[ -0.04543550685048103, -0.023847317323088646, -0.01539013534784317, 0.006195840425789356, 0.03528693690896034, -0.005598659627139568, 0.023936647921800613, 0.0013957304181531072, -0.04214232414960861, -0.03189199045300484, -0.013307764194905758, 0.012559093534946442, -0.0973237082362175, -0...
null
263
Improving LLM Safety Alignment with Dual-Objective Optimization
https://openreview.net/forum?id=Kjivk5OPtL
[ "Xuandong Zhao", "Will Cai", "Tianneng Shi", "David Huang", "Licong Lin", "Song Mei", "Dawn Song" ]
Poster
Existing training-time safety alignment techniques for large language models (LLMs) remain vulnerable to jailbreak attacks. Direct preference optimization (DPO), a widely deployed alignment method, exhibits limitations in both experimental and theoretical contexts as its loss function proves suboptimal for refusal lear...
LLM Safety, Alignment, DPO
null
15,093
2503.03710
[ -0.02682673931121826, 0.007721925154328346, -0.027780205011367798, 0.04208623990416527, 0.029650606215000153, 0.037272218614816666, 0.04038134217262268, -0.01796398125588894, -0.02333131991326809, -0.00420598266646266, -0.02513132058084011, 0.015821604058146477, -0.08776749670505524, -0.01...
https://github.com/wicai24/DOOR-Alignment
264
GuardAgent: Safeguard LLM Agents via Knowledge-Enabled Reasoning
https://openreview.net/forum?id=2nBcjCZrrP
[ "Zhen Xiang", "Linzhi Zheng", "Yanjie Li", "Junyuan Hong", "Qinbin Li", "Han Xie", "Jiawei Zhang", "Zidi Xiong", "Chulin Xie", "Carl Yang", "Dawn Song", "Bo Li" ]
Poster
The rapid advancement of large language model (LLM) agents has raised new concerns regarding their safety and security. In this paper, we propose GuardAgent, the first guardrail agent to protect target agents by dynamically checking whether their actions satisfy given safety guard requests. Specifically, GuardAgent fir...
agent, large language model, guardrail, reasoning
We propose GuardAgent, the first LLM agent as a guardrail to protect other LLM agents via knowledge-enabled reasoning.
15,081
2406.09187
[ -0.0014556193491443992, -0.01200051698833704, -0.01140289381146431, 0.03492821753025055, 0.04959810525178909, -0.016601473093032837, 0.050018150359392166, -0.01548054814338684, -0.01853344775736332, -0.01498882845044136, -0.03521642088890076, 0.02510678395628929, -0.05332028493285179, -0.0...
https://github.com/guardagent/code
265
Can Diffusion Models Learn Hidden Inter-Feature Rules Behind Images?
https://openreview.net/forum?id=ERU7QgD6gc
[ "Yujin Han", "Andi Han", "Wei Huang", "Chaochao Lu", "Difan Zou" ]
Poster
Despite the remarkable success of diffusion models (DMs) in data generation, they exhibit specific failure cases with unsatisfactory outputs. We focus on one such limitation: the ability of DMs to learn hidden rules between image features. Specifically, for image data with dependent features ($\mathbf{x}$) and ($\mathb...
Diffusion Model, Deep Generative Model
null
15,070
2502.04725
[ -0.004790946375578642, -0.0026607888285070658, 0.00032563935383222997, 0.04686586931347847, 0.06677094846963882, 0.021132906898856163, 0.028807753697037697, -0.021443026140332222, -0.0351557731628418, -0.06016439571976662, 0.006153073161840439, -0.0007528936839662492, -0.03876880928874016, ...
null
266
Correlated Errors in Large Language Models
https://openreview.net/forum?id=kzYq2hfyHB
[ "Elliot Myunghoon Kim", "Avi Garg", "Kenny Peng", "Nikhil Garg" ]
Poster
Diversity in training data, architecture, and providers is assumed to mitigate homogeneity in LLMs. However, we lack empirical evidence on whether different LLMs differ \textit{meaningfully}. We conduct a large-scale empirical evaluation on over 350 LLMs overall, using two popular leaderboards and a resume-screening ta...
algorithmic monoculture, evaluations, LLMs
We evaluate the extent to which different language models are correlated in how they err.
15,069
2506.07962
[ 0.005062846466898918, -0.010521692223846912, -0.028826536610722542, 0.04273439571261406, 0.042902667075395584, 0.021554069593548775, 0.042344022542238235, 0.029650187119841576, -0.022326430305838585, 0.006086912006139755, -0.011577173136174679, 0.03232625499367714, -0.0689707100391388, -0....
https://github.com/nikhgarg/llm_correlated_errors_public
267
WAVE: Weighted Autoregressive Varying Gate for Time Series Forecasting
https://openreview.net/forum?id=Qqn5ktBUxH
[ "Jiecheng Lu", "Xu Han", "Yan Sun", "Shihao Yang" ]
Poster
We propose a Weighted Autoregressive Varying gatE (WAVE) attention mechanism equipped with both Autoregressive (AR) and Moving-average (MA) components. It can adapt to various attention mechanisms, enhancing and decoupling their ability to capture long-range and local temporal patterns in time series data. In this pape...
Attention, Transformer, Autoregressive Moving-average, Time Series Forecasting
null
15,065
2410.03159
[ -0.0050753080286085606, -0.027133943513035774, 0.01688685454428196, 0.01806839369237423, 0.010950237512588501, 0.058523669838905334, 0.037251971662044525, -0.002546260366216302, -0.039474006742239, -0.039897721260786057, -0.015552552416920662, 0.021823283284902573, -0.06056225672364235, 0....
https://github.com/ljc-fvnr/arma-attention
268
Addressing Imbalanced Domain-Incremental Learning through Dual-Balance Collaborative Experts
https://openreview.net/forum?id=dwjwvTwV3V
[ "Lan Li", "Da-Wei Zhou", "Han-Jia Ye", "De-Chuan Zhan" ]
Poster
Domain-Incremental Learning (DIL) focuses on continual learning in non-stationary environments, requiring models to adjust to evolving domains while preserving historical knowledge. DIL faces two critical challenges in the context of imbalanced data: intra-domain class imbalance and cross-domain class distribution shif...
Domain-incremental learning, Class-imbalanced learning
We propose the Dual-Balance Collaborative Experts (DCE) framework to address intra-domain class imbalance and cross-domain class distribution shifts in domain-incremental learning.
15,062
2507.07100
[ -0.005468929652124643, -0.030143991112709045, -0.012466346845030785, 0.011772030964493752, 0.021936852484941483, -0.009969529695808887, 0.02791747637093067, -0.020196987316012383, -0.004317500162869692, -0.022409869357943535, 0.003280479460954666, 0.016660267487168312, -0.06591451168060303, ...
https://github.com/Lain810/DCE
269
ReVISE: Learning to Refine at Test-Time via Intrinsic Self-Verification
https://openreview.net/forum?id=cBtsxtJqEK
[ "Hyunseok Lee", "Seunghyuk Oh", "Jaehyung Kim", "Jinwoo Shin", "Jihoon Tack" ]
Poster
Self-awareness, i.e., the ability to assess and correct one's generation, is a fundamental aspect of human intelligence, making its replication in large language models (LLMs) an important yet challenging task. Previous works tackle this by employing extensive reinforcement learning or relying on large external verifie...
Self-correct, Test-time compute, Large Language Model, Self-verify, Self-awareness
null
15,061
2502.14565
[ 0.02350390888750553, -0.0041251592338085175, -0.008601296693086624, 0.059539686888456345, 0.07398449629545212, 0.028835108503699303, 0.04051867127418518, 0.015423095785081387, -0.01835225149989128, -0.025696849450469017, 0.0225724745541811, 0.04707648977637291, -0.03155119717121124, -0.039...
https://github.com/seunghyukoh/revise
270
Large Displacement Motion Transfer with Unsupervised Anytime Interpolation
https://openreview.net/forum?id=rMCyR6VSOM
[ "Guixiang Wang", "Jianjun Li" ]
Poster
Motion transfer transfers the pose in a driving video to the object in a source image, so that the object in the source image moves. Although great progress has been made recently in unsupervised motion transfer, many unsupervised methods still struggle to accurately model large displacement motions when large motion differences occu...
Image animation, action generation, Unsupervised Interpolation
null
15,058
null
[ 0.00011868024739669636, -0.013949697837233543, -0.014408856630325317, 0.02313443087041378, 0.03794598579406738, 0.016896145418286324, 0.0018198260804638267, 0.016815876588225365, -0.0023838430643081665, -0.0454523004591465, -0.024859674274921417, -0.046595167368650436, -0.0464070700109005, ...
null
271
One Example Shown, Many Concepts Known! Counterexample-Driven Conceptual Reasoning in Mathematical LLMs
https://openreview.net/forum?id=A31Ep22iQ7
[ "Yinghui Li", "Jiayi Kuang", "Haojing Huang", "Zhikun Xu", "Xinnian Liang", "Yi Yu", "Wenlian Lu", "Yangning Li", "Xiaoyu Tan", "Chao Qu", "Ying Shen", "Hai-Tao Zheng", "Philip S. Yu" ]
Poster
Leveraging mathematical Large Language Models (LLMs) for proof generation is a fundamental topic in LLMs research. We argue that the ability of current LLMs to prove statements largely depends on whether they have encountered the relevant proof process during training. This reliance limits their deeper understanding of...
Large Language Models, Mathematical Reasoning
The first work aiming to enable LLMs to perform mathematical reasoning by giving counterexamples.
15,052
2502.10454
[ -0.025455553084611893, -0.01044541411101818, -0.009988093748688698, 0.024986209347844124, 0.06738107651472092, 0.004258750006556511, 0.02668027952313423, 0.008947574533522129, -0.018909931182861328, -0.007274234667420387, -0.0007755422848276794, 0.017982976511120796, -0.0543360710144043, 0...
https://github.com/THUKElab/COUNTERMATH
272
LEAPS: A discrete neural sampler via locally equivariant networks
https://openreview.net/forum?id=Hq2RniQAET
[ "Peter Holderrieth", "Michael Samuel Albergo", "Tommi Jaakkola" ]
Poster
We propose *LEAPS*, an algorithm to sample from discrete distributions known up to normalization by learning a rate matrix of a continuous-time Markov chain (CTMC). LEAPS can be seen as a continuous-time formulation of annealed importance sampling and sequential Monte Carlo methods, extended so that the variance of the...
CTMC, Bayesian inference, MCMC, sampling
A discrete neural sampler for unnormalized densities via locally equivariant networks.
15,045
2502.10843
[ -0.005582267884165049, -0.010871191509068012, -0.019806545227766037, 0.05396373197436333, 0.030417371541261673, 0.024520179256796837, 0.005432577803730965, 0.012916740961372852, -0.006577502470463514, -0.03904719278216362, 0.02743411995470524, -0.011880558915436268, -0.06158812344074249, -...
https://github.com/malbergo/leaps
273
Neural Interpretable PDEs: Harmonizing Fourier Insights with Attention for Scalable and Interpretable Physics Discovery
https://openreview.net/forum?id=JvRoF9FRga
[ "Ning Liu", "Yue Yu" ]
Poster
Attention mechanisms have emerged as transformative tools in core AI domains such as natural language processing and computer vision. Yet, their largely untapped potential for modeling intricate physical systems presents a compelling frontier. Learning such systems often entails discovering operators that map between f...
Neural Operators, Physics Discovery, Attention Mechanism, Inverse PDE problems
null
15,041
2505.23106
[ -0.04809591919183731, -0.005375818815082312, 0.0045326631516218185, 0.03529630973935127, 0.015884222462773323, 0.03302742913365364, -0.00015271120355464518, -0.016273630782961845, -0.04337313771247864, -0.03658469021320343, -0.010338904336094856, -0.002921631094068289, -0.034316807985305786,...
https://github.com/fishmoon1234/Nonlocal-Attention-Operator
274
MuseControlLite: Multifunctional Music Generation with Lightweight Conditioners
https://openreview.net/forum?id=VK47MdCjBH
[ "Fang-Duo Tsai", "Shih-Lun Wu", "Weijaw Lee", "Sheng-Ping Yang", "Bo-Rui Chen", "Hao-Chung Cheng", "Yi-Hsuan Yang" ]
Poster
We propose MuseControlLite, a lightweight mechanism designed to fine-tune text-to-music generation models for precise conditioning using various time-varying musical attributes and reference audio signals. The key finding is that positional embeddings, which have been seldom used by text-to-music generation models in t...
Sound, Artificial Intelligence, Machine Learning, Audio and Speech Processing
A framework that employs rotary positional embeddings in decoupled cross-attention layers to achieve precise controllability for musical attribute conditioning, as well as audio inpainting and outpainting.
15,022
2506.18729
[ -0.004846394527703524, -0.034961357712745667, -0.0014921105466783047, 0.049770765006542206, 0.04825100302696228, -0.005130887031555176, 0.03333751857280731, -0.013866524212062359, -0.041149962693452835, -0.048985205590724945, -0.001251023611985147, 0.024023516103625298, -0.05407227203249931,...
https://github.com/fundwotsai2001/MuseControlLite
275
Diffusion Models are Secretly Exchangeable: Parallelizing DDPMs via Auto Speculation
https://openreview.net/forum?id=n08niE37ku
[ "Hengyuan Hu", "Aniket Das", "Dorsa Sadigh", "Nima Anari" ]
Poster
Denoising Diffusion Probabilistic Models (DDPMs) have emerged as powerful tools for generative modeling. However, their sequential computation requirements lead to significant inference-time bottlenecks. In this work, we utilize the connection between DDPMs and Stochastic Localization to prove that, under an appropriat...
DDPM, Stochastic Localization, speculative decoding
null
15,015
null
[ -0.003942302893847227, 0.00008476361108478159, -0.01699342392385006, 0.05319308862090111, 0.0499882698059082, 0.05030309781432152, 0.03727134317159653, 0.012433535419404507, 0.0011981242569163442, -0.051215678453445435, 0.023110518231987953, -0.031556498259305954, -0.04477324336767197, 0.0...
null
276
Does Data Scaling Lead to Visual Compositional Generalization?
https://openreview.net/forum?id=M2WMUuwoh5
[ "Arnas Uselis", "Andrea Dittadi", "Seong Joon Oh" ]
Poster
Compositional understanding is crucial for human intelligence, yet it remains unclear whether contemporary vision models exhibit it. The dominant machine learning paradigm is built on the premise that scaling data and model sizes will improve out-of-distribution performance, including compositional generalization. We t...
ML, compositionality, generalization, OOD generalization, scaling, transfer learning
null
15,013
2507.07102
[ 0.01145224180072546, -0.003375240135937929, 0.004195031709969044, 0.044867031276226044, 0.032145947217941284, 0.015255313366651535, 0.015906676650047302, 0.02188793383538723, -0.04698110744357109, -0.028661321848630905, -0.0250301044434309, -0.009317475371062756, -0.07989797741174698, 0.01...
https://github.com/oshapio/visual-compositional-generalization
277
NEAR: Neural Electromagnetic Array Response
https://openreview.net/forum?id=yXcY4wKAG7
[ "Yinyan Bu", "Jiajie Yu", "Kai Zheng", "Xinyu Zhang", "Piya Pal" ]
Poster
We address the challenge of achieving angular super-resolution in multi-antenna radar systems that are widely used for localization, navigation, and automotive perception. A multi-antenna radar achieves very high resolution by computationally creating a large virtual sensing system using very few physical antennas. How...
Implicit Neural Representation, Untrained Neural Network, Radar Sensing, Super-resolution, Physics-informed Neural Network
null
15,009
null
[ -0.014590131118893623, 0.005393107421696186, 0.02519463561475277, 0.003772300435230136, 0.03226276859641075, 0.010265118442475796, 0.005376018583774567, -0.004041665233671665, -0.049227453768253326, -0.04678460955619812, -0.0015359811950474977, 0.016229692846536636, -0.026085184887051582, ...
null
278
Dynamical phases of short-term memory mechanisms in RNNs
https://openreview.net/forum?id=ybBuwgOPOd
[ "Bariscan Kurtkaya", "Fatih Dinc", "Mert Yuksekgonul", "Marta Blanco-Pozo", "Ege Cirakman", "Mark Schnitzer", "Yucel Yemez", "Hidenori Tanaka", "Peng Yuan", "Nina Miolane" ]
Poster
Short-term memory is essential for cognitive processing, yet our understanding of its neural mechanisms remains unclear. Neuroscience has long focused on how sequential activity patterns, where neurons fire one after another within large networks, can explain how information is maintained. While recurrent connections w...
Dynamical system theory, computational neuroscience, working memory
null
15,002
2502.17433
[ -0.035083018243312836, 0.002353942021727562, -0.03233511000871658, 0.019984008744359016, 0.023751012980937958, 0.01800467260181904, 0.05792640149593353, 0.037269264459609985, -0.09012721478939056, -0.019357284530997276, -0.002307633636519313, -0.00890831183642149, -0.03782777115702629, 0.0...
https://github.com/fatihdinc/dynamical-phases-stm
279
Improved Approximations for Hard Graph Problems using Predictions
https://openreview.net/forum?id=5QMJZiHuGn
[ "Anders Aamand", "Justin Y. Chen", "Siddharth Gollapudi", "Sandeep Silwal", "Hao WU" ]
Poster
We design improved approximation algorithms for NP-hard graph problems by incorporating predictions (e.g., learned from past data). Our prediction model builds upon and extends the $\varepsilon$-prediction framework by Cohen-Addad, d'Orsi, Gupta, Lee, and Panigrahi (NeurIPS 2024). We consider an edge-based version of ...
learning-augmented algorithms, learning augmented algorithms, data driven algorithms, data-driven algorithms, combinatorial optimization, approximation algorithms
We give learning-augmented approximation algorithms for fundamental NP-Hard graph problems using edge predictions, leading to improved approximation ratios.
14,999
2505.23967
[ -0.014744375832378864, -0.004203450866043568, 0.012503072619438171, 0.0552925206720829, 0.04840073361992836, 0.03874996677041054, 0.014681069180369377, -0.00892048142850399, -0.023264529183506966, -0.05026504024863243, 0.00021316860511433333, -0.01308341696858406, -0.09705336391925812, 0.0...
null
280
Why Has Predicting Downstream Capabilities of Frontier AI Models with Scale Remained Elusive?
https://openreview.net/forum?id=I1NtlLvJal
[ "Rylan Schaeffer", "Hailey Schoelkopf", "Brando Miranda", "Gabriel Mukobi", "Varun Madan", "Adam Ibrahim", "Herbie Bradley", "Stella Biderman", "Sanmi Koyejo" ]
Poster
Predictable behavior from scaling advanced AI systems is an extremely desirable property for engineers, companies, economists and governments alike, and while a well-established literature exists on how pretraining performance scales, predictable scaling behavior on downstream capabilities remains elusive. While many f...
evaluations, benchmarks, scaling laws, emergent abilities, capabilities, frontier models, foundation models
What makes predicting downstream capabilities of frontier AI models with scale difficult?
14,995
2406.04391
[ -0.026424549520015717, -0.005961464252322912, 0.013731131330132484, 0.020755987614393234, 0.06073049455881119, 0.01776282861828804, 0.01690148562192917, 0.010898569598793983, -0.034946102648973465, -0.0008687393274158239, 0.0009350773179903626, 0.008366349153220654, -0.052485015243291855, ...
https://github.com/RylanSchaeffer/KoyejoLab-Why-Has-Predicting-Downstream-Capabilities-Remained-Elusive
281
Nearly Optimal Algorithms for Contextual Dueling Bandits from Adversarial Feedback
https://openreview.net/forum?id=ATNEHkXFrW
[ "Qiwei Di", "Jiafan He", "Quanquan Gu" ]
Poster
Learning from human feedback plays an important role in aligning generative models, such as large language models (LLM). However, the effectiveness of this approach can be influenced by adversaries, who may intentionally provide misleading preferences to manipulate the output in an undesirable or harmful direction. To ...
Dueling Bandits, Adversarial feedback, optimal, uncertainty-weighted maximum likelihood estimation
We study a specific model within this problem domain--contextual dueling bandits with adversarial feedback, where the true preference label can be flipped by an adversary.
14,986
2404.10776
[ -0.03149238973855972, -0.006163422018289566, -0.011271717958152294, 0.06139526143670082, -0.0049213385209441185, 0.02128284052014351, 0.01562128309160471, 0.024255655705928802, -0.024427007883787155, -0.03393753618001938, -0.026757126674056053, 0.04484300687909126, -0.055496763437986374, -...
null
282
Contradiction Retrieval via Contrastive Learning with Sparsity
https://openreview.net/forum?id=VzFXb6Au58
[ "Haike Xu", "Zongyu Lin", "Kai-Wei Chang", "Yizhou Sun", "Piotr Indyk" ]
Poster
Contradiction retrieval refers to identifying and extracting documents that explicitly disagree with or refute the content of a query, which is important to many downstream applications like fact checking and data cleaning. To retrieve contradiction argument to the query from large document corpora, existing methods su...
contradiction retrieval, sentence embedding
We design SparseCL, a novel approach for contradiction retrieval, which leverages specially trained sentence embeddings and a combined metric of cosine similarity and sparsity to efficiently identify documents that contradict a given query.
14,984
null
[ -0.00416797399520874, -0.0304596908390522, -0.02730642631649971, 0.07634615153074265, 0.033641885966062546, -0.029780497774481773, 0.01595683954656124, 0.03158215060830116, -0.04203583300113678, -0.007737336680293083, -0.025154756382107735, 0.04653957113623619, -0.05422766134142876, 0.0015...
null
283
Sounding that Object: Interactive Object-Aware Image to Audio Generation
https://openreview.net/forum?id=6KeALGcu2j
[ "Tingle Li", "Baihe Huang", "Xiaobin Zhuang", "Dongya Jia", "Jiawei Chen", "Yuping Wang", "Zhuo Chen", "Gopala Anumanchipalli", "Yuxuan Wang" ]
Poster
Generating accurate sounds for complex audio-visual scenes is challenging, especially in the presence of multiple objects and sound sources. In this paper, we propose an interactive object-aware audio generation model that grounds sound generation in user-selected visual objects within images. Our method integrates obj...
Sound Generation, Audio-Visual Learning, Multi-Modal Learning
We interactively generate sounds specific to user-selected objects within complex visual scenes.
14,971
2506.04214
[ 0.01234943512827158, 0.012126944959163666, -0.010918401181697845, 0.025690535083413124, 0.018908748403191566, 0.038939572870731354, 0.03852678835391998, 0.01722489297389984, -0.0490560345351696, -0.050999656319618225, -0.0377318449318409, 0.019734537228941917, -0.05044735595583916, 0.00855...
https://github.com/Tinglok/avobject
284
Correlation Clustering Beyond the Pivot Algorithm
https://openreview.net/forum?id=OzQLuoKMQZ
[ "Soheil Behnezhad", "Moses Charikar", "Vincent Cohen-Addad", "Alma Ghafari", "Weiyun ma" ]
Poster
We study the classic correlation clustering problem. Given $n$ objects and a complete labeling of the object-pairs as either “similar” or “dissimilar”, the goal is to partition the objects into arbitrarily many clusters while minimizing disagreements with the labels. A classic Pivot algorithm for this problem, due to...
Correlation Clustering
null
14,970
2404.06797
[ -0.0020892005413770676, 0.002796149579808116, -0.0016731391660869122, 0.039022430777549744, 0.029290853068232536, 0.047367505729198456, -0.006745356135070324, -0.00021647382527589798, -0.04889477416872978, -0.026528742164373398, -0.02312908135354519, -0.008484472520649433, -0.079243831336498...
https://github.com/almaho/Modified_Pivot_Correlation_Clustering
285
Test-time Correlation Alignment
https://openreview.net/forum?id=0dualJz9OI
[ "Linjing You", "Jiabao Lu", "Xiayuan Huang" ]
Poster
Deep neural networks often degrade under distribution shifts. Although domain adaptation offers a solution, privacy constraints often prevent access to source data, making Test-Time Adaptation (TTA)—which adapts using only unlabeled test data—increasingly attractive. However, current TTA methods still face practical ch...
Test-time adaptation, Correlation alignment
A theoretically supported TTA paradigm that effectively addresses the efficiency and domain-forgetting challenges by aligning feature correlations.
14,963
2505.00533
[ 0.04039596766233444, 0.0008358502527698874, -0.01609492301940918, 0.061855461448431015, 0.0349796861410141, 0.04545087367296219, 0.033274970948696136, 0.031747810542583466, 0.01943444274365902, -0.02369765006005764, 0.021390527486801147, 0.016885004937648773, -0.06922436505556107, -0.01000...
https://github.com/youlj109/TCA
286
BalancEdit: Dynamically Balancing the Generality-Locality Trade-off in Multi-modal Model Editing
https://openreview.net/forum?id=JWtcAlXkMN
[ "Dongliang Guo", "Mengxuan Hu", "Zihan Guan", "Thomas Hartvigsen", "Sheng Li" ]
Poster
Large multi-modal models inevitably decay over time as facts update and previously learned information becomes outdated. Traditional approaches such as fine-tuning are often impractical for updating these models due to their size and complexity. Instead, direct knowledge editing within the models presents a more viable...
Model editing, Multi-modal model
null
14,934
2505.01343
[ 0.011372456327080727, -0.04069177806377411, 0.0028959657065570354, 0.05550006777048111, 0.029849251732230186, 0.02249530702829361, 0.022294258698821068, 0.007643044926226139, -0.029366683214902878, -0.061216991394758224, -0.0017135454108938575, 0.011845948174595833, -0.12021094560623169, -...
https://github.com/donglgcn/BalancEdit/tree/MMOKVQA
287
DyCodeEval: Dynamic Benchmarking of Reasoning Capabilities in Code Large Language Models Under Data Contamination
https://openreview.net/forum?id=3BZyQqbytZ
[ "Simin Chen", "Pranav Pusarla", "Baishakhi Ray" ]
Poster
The rapid advancement of code large language models (Code LLMs) underscores the critical need for effective and transparent benchmarking methods. However, current benchmarking predominantly relies on publicly available, human-created datasets. The widespread use of these static benchmark datasets makes the evaluation p...
benchmarking, code generation, large language model, trustworthy ML
We identify the limitations of static benchmarking for code LLMs and propose a dynamic benchmarking approach.
14,922
null
[ -0.011711313389241695, -0.00371113745495677, -0.03112262301146984, 0.04778706282377243, 0.03787446394562721, 0.03786424547433853, 0.017736759036779404, 0.01395313162356615, -0.02446199208498001, -0.010058851912617683, -0.03182158246636391, 0.020092325285077095, -0.06336382031440735, -0.001...
null
288
ResKoopNet: Learning Koopman Representations for Complex Dynamics with Spectral Residuals
https://openreview.net/forum?id=Svk7jjhlSu
[ "Yuanchao Xu", "Kaidi Shao", "Nikos K. Logothetis", "Zhongwei Shen" ]
Poster
Analyzing the long-term behavior of high-dimensional nonlinear dynamical systems remains a significant challenge. While the Koopman operator framework provides a powerful global linearization tool, current methods for approximating its spectral components often face theoretical limitations and depend on predefined dict...
Koopman operator, data driven dynamical system, dictionary learning, spectral analysis
null
14,918
2501.00701
[ -0.06601136177778244, -0.011969898827373981, -0.018159231171011925, 0.05274466797709465, 0.04718836769461632, 0.024722237139940262, 0.04072680324316025, -0.017682228237390518, -0.041049160063266754, -0.047550853341817856, 0.0484577976167202, 0.009724407456815243, -0.0887448787689209, 0.003...
https://github.com/talkingdoll/reskoopnet
289
Compositional Flows for 3D Molecule and Synthesis Pathway Co-design
https://openreview.net/forum?id=4aXfSLfM0Z
[ "Tony Shen", "Seonghwan Seo", "Ross Irwin", "Kieran Didi", "Simon Olsson", "Woo Youn Kim", "Martin Ester" ]
Poster
Many generative applications, such as synthesis-based 3D molecular design, involve constructing compositional objects with continuous features. Here, we introduce Compositional Generative Flows (CGFlow), a novel framework that extends flow matching to generate objects in compositional steps while modeling continuous st...
drug discovery, synthesizable molecular design, GFlowNets, flow matching
GFlowNets meet flow matching for 3D molecule generation via synthesis pathways.
14,914
2504.08051
[ 0.022183502092957497, 0.00542400311678648, 0.015755441039800644, 0.028120646253228188, 0.043903592973947525, 0.00870116800069809, -0.003147105686366558, 0.007296334486454725, 0.01828809082508087, -0.05525711923837662, 0.004347121808677912, -0.02462010271847248, -0.07632960379123688, 0.0243...
null
290
Aligning Protein Conformation Ensemble Generation with Physical Feedback
https://openreview.net/forum?id=Asr955jcuZ
[ "Jiarui Lu", "Xiaoyin Chen", "Stephen Zhewen Lu", "Aurelie Lozano", "Vijil Chenthamarakshan", "Payel Das", "Jian Tang" ]
Poster
Protein dynamics play a crucial role in protein biological functions and properties, and their traditional study typically relies on time-consuming molecular dynamics (MD) simulations conducted in silico. Recent advances in generative modeling, particularly denoising diffusion models, have enabled efficient accurate pr...
Protein, generative models, molecular dynamics, conformation generation, alignments
Incorporating physical energy feedback into the diffusion model for better protein conformation ensemble generation.
14,909
2505.24203
[ -0.004185753874480724, 0.008983071893453598, -0.026231693103909492, 0.042778048664331436, 0.033782511949539185, -0.008903831243515015, 0.021463017910718918, 0.0063123442232608795, -0.007802275940775871, -0.03754081204533577, 0.016271570697426796, -0.008098814636468887, -0.10501201450824738, ...
https://github.com/lujiarui/eba
291
Hi Robot: Open-Ended Instruction Following with Hierarchical Vision-Language-Action Models
https://openreview.net/forum?id=lNVHg9npif
[ "Lucy Xiaoyang Shi", "brian ichter", "Michael Robert Equi", "Liyiming Ke", "Karl Pertsch", "Quan Vuong", "James Tanner", "Anna Walling", "Haohuan Wang", "Niccolo Fusai", "Adrian Li-Bell", "Danny Driess", "Lachy Groom", "Sergey Levine", "Chelsea Finn" ]
Poster
Generalist robots that can perform a range of different tasks in open-world settings must be able to not only reason about the steps needed to accomplish their goals, but also process complex instructions, prompts, and even feedback during task execution. Intricate instructions (e.g., "Could you make me a vegetarian sa...
Machine Learning, Robotics, Language, Vision-Language Models
Hi Robot enables robots to follow open-ended, complex instructions, adapt to feedback, and interact with humans.
14,903
2502.19417
[ -0.02603282406926155, 0.03387001156806946, -0.017942115664482117, 0.023734625428915024, 0.027871916070580482, -0.006912918761372566, 0.028905194252729416, 0.03227623552083969, -0.032071877270936966, -0.027317265048623085, -0.04550918936729431, 0.02304294891655445, -0.06261199712753296, -0....
null
292
Activation by Interval-wise Dropout: A Simple Way to Prevent Neural Networks from Plasticity Loss
https://openreview.net/forum?id=Y0hjl4L1ve
[ "Sangyeon Park", "Isaac Han", "Seungwon Oh", "KyungJoong Kim" ]
Poster
Plasticity loss, a critical challenge in neural network training, limits a model's ability to adapt to new tasks or shifts in data distribution. While widely used techniques like L2 regularization and Layer Normalization have proven effective in mitigating this issue, Dropout remains notably ineffective. This paper int...
loss of plasticity, plasticity, continual learning, reinforcement learning
We introduce AID (Activation by Interval-wise Dropout), a novel method inspired by Dropout, designed to address plasticity loss by applying different dropout probabilities across each preactivation interval.
14,897
2502.01342
[ -0.005008947569876909, -0.03141462057828903, -0.014992604963481426, 0.043200161308050156, 0.022190840914845467, 0.016755271703004837, 0.01519010215997696, 0.010483435355126858, -0.05547108128666878, -0.024632737040519714, -0.01729259453713894, 0.019278444349765778, -0.03996788710355759, -0...
null
293
Pointwise Information Measures as Confidence Estimators in Deep Neural Networks: A Comparative Study
https://openreview.net/forum?id=MPlcU7Sxzs
[ "Shelvia Wongso", "Rohan Ghosh", "Mehul Motani" ]
Poster
Estimating the confidence of deep neural network predictions is crucial for safe deployment in high-stakes applications. While softmax probabilities are commonly used, they are often poorly calibrated, and existing calibration methods have been shown to be detrimental to failure prediction. In this paper, we propose us...
information theory, confidence estimation, deep neural networks
null
14,884
null
[ -0.015470151789486408, -0.014447297900915146, -0.015469670295715332, 0.03408997133374214, 0.018329354003071785, 0.03780536726117134, 0.04444383084774017, -0.00263033015653491, -0.022587856277823448, -0.04852363467216492, 0.0069222659803926945, -0.007400186266750097, -0.04741737246513367, -...
https://github.com/kentridgeai/PI-Conf-Est
294
FisherSFT: Data-Efficient Supervised Fine-Tuning of Language Models Using Information Gain
https://openreview.net/forum?id=e02oLEbehE
[ "Rohan Deb", "Kiran Koshy Thekumparampil", "Kousha Kalantari", "Gaurush Hiranandani", "Shoham Sabach", "Branislav Kveton" ]
Poster
Supervised fine-tuning (SFT) is the most common way of adapting large language models (LLMs) to a new domain. In this paper, we improve the efficiency of SFT by selecting an informative subset of training examples. Specifically, for a fixed budget of training examples, which determines the computational cost of fine-tu...
active learning, optimal design, supervised fine-tuning, large language models
null
14,869
2505.14826
[ -0.006001676432788372, -0.03606176748871803, 0.022912781685590744, 0.028898948803544044, 0.039092618972063065, 0.03496572747826576, 0.025257082656025887, 0.003945302218198776, -0.009626910090446472, -0.0005200681043788791, -0.0027652913704514503, 0.05551519989967346, -0.0766899436712265, -...
null
295
Ranking with Multiple Oracles: From Weak to Strong Stochastic Transitivity
https://openreview.net/forum?id=d3PjjtGc07
[ "Tao Jin", "Yue Wu", "Quanquan Gu", "Farzad Farnoud" ]
Poster
We study the problem of efficiently aggregating the preferences of items from multiple information sources (oracles) and infer the ranking under both the weak stochastic transitivity (WST) and the strong stochastic transitivity (SST) conditions. When the underlying preference model satisfies the WST condition, we propo...
active ranking, stochastic transitivity, rank aggregation, dueling bandits, lower bound, upper bound
null
14,866
null
[ -0.030478619039058685, -0.0014093336649239063, 0.006468503270298243, 0.061138831079006195, 0.029983077198266983, 0.022548364475369453, 0.0081200385466218, 0.01636405661702156, -0.015667906031012535, -0.03965429216623306, -0.025871863588690758, 0.007498729042708874, -0.07097805291414261, -0...
null
296
RZ-NAS: Enhancing LLM-guided Neural Architecture Search via Reflective Zero-Cost Strategy
https://openreview.net/forum?id=9UExQpH078
[ "Zipeng Ji", "Guanghui Zhu", "Chunfeng Yuan", "Yihua Huang" ]
Poster
LLM-to-NAS is a promising field at the intersection of Large Language Models (LLMs) and Neural Architecture Search (NAS), as recent research has explored the potential of architecture generation leveraging LLMs on multiple search spaces. However, the existing LLM-to-NAS methods face the challenges of limited search spa...
Automated Machine Learning, Neural Architecture Search, Large Language Models
A novel framework that combines the text- and code-level comprehension capabilities of LLMs with a Reflective Zero-Cost evaluation strategy for NAS
14,855
null
[ 0.0009409794583916664, -0.026727914810180664, 0.005264367908239365, 0.02418585866689682, 0.05849486216902733, 0.0558110848069191, 0.024270251393318176, 0.007025201339274645, -0.022039754316210747, -0.0127024557441473, -0.009951958432793617, 0.0038498882204294205, -0.032826635986566544, -0....
https://github.com/PasaLab/RZ-NAS
297
In-Context Linear Regression Demystified: Training Dynamics and Mechanistic Interpretability of Multi-Head Softmax Attention
https://openreview.net/forum?id=3TM3fxwTps
[ "Jianliang He", "Xintian Pan", "Siyu Chen", "Zhuoran Yang" ]
Poster
We study how multi-head softmax attention models are trained to perform in-context learning on linear data. Through extensive empirical experiments and rigorous theoretical analysis, we demystify the emergence of elegant attention patterns: a diagonal and homogeneous pattern in the key-query weights, and a last-entry-...
Multi-head Attention, Mechanism Interpretation, Training Dynamics, Expressive Power
null
14,853
2503.12734
[ -0.0012744941050186753, 0.030022768303751945, 0.018537016585469246, 0.002015884965658188, 0.005279691889882088, 0.02116508223116398, 0.027989225462079048, 0.00714128278195858, -0.027710769325494766, -0.0126390615478158, -0.053562574088573456, 0.025137778371572495, -0.07167860120534897, 0.0...
https://github.com/XintianPan/ICL_linear
298
Triple-Optimistic Learning for Stochastic Contextual Bandits with General Constraints
https://openreview.net/forum?id=NhJ4cCifqF
[ "Hengquan Guo", "Lingkai Zu", "Xin Liu" ]
Poster
We study contextual bandits with general constraints, where a learner observes contexts and aims to maximize cumulative rewards while satisfying a wide range of general constraints. We introduce the Optimistic$^3$ framework, a novel learning and decision-making approach that integrates optimistic design into parameter ...
triple-optimistic framework, contextual bandits, general constraints
null
14,845
null
[ 0.0065579186193645, 0.02076876536011696, 0.01905086077749729, 0.043759431689977646, 0.01310922671109438, 0.034876592457294464, 0.011888344772160053, 0.01879185251891613, -0.03648024797439575, -0.05127497762441635, -0.00991506315767765, 0.0361916609108448, -0.06193067878484726, -0.046878147...
null
299
Federated In-Context Learning: Iterative Refinement for Improved Answer Quality
https://openreview.net/forum?id=TUk7gCqtmf
[ "Ruhan Wang", "Zhiyong Wang", "Chengkai Huang", "Rui Wang", "Tong Yu", "Lina Yao", "John C.S. Lui", "Dongruo Zhou" ]
Poster
For question-answering (QA) tasks, in-context learning (ICL) enables language models (LMs) to generate responses without modifying their parameters by leveraging examples provided in the input. However, the effectiveness of ICL heavily depends on the availability of high-quality examples, which are often scarce due to ...
Federated Learning, In-context Learning, Large Language Model, Natural Language Processing
We propose Fed-ICL, a communication-efficient and privacy-preserving framework that improves language model performance through federated in-context learning without sharing model parameters or raw data.
14,836
2506.07440
[ 0.006688958965241909, -0.019380126148462296, -0.006453735288232565, 0.06665170937776566, 0.029135366901755333, 0.01771646924316883, 0.008209080435335636, 0.026138370856642723, -0.018840691074728966, 0.01265729870647192, -0.020430563017725945, 0.02674751728773117, -0.04020555689930916, -0.0...
null