| column | dtype | range |
| --- | --- | --- |
| title | string | length 14–154 |
| paper_url | string | length 42–42 |
| authors | list | length 1–21 |
| type | string | 3 classes |
| abstract | string | length 413–2.52k |
| keywords | string | length 4–397 |
| TL;DR | string | length 5–250 |
| submission_number | int64 | 2–14.3k |
| arxiv_id | string | length 10–10 |
| embedding | list | length 384–384 |
| github_url | string | length 0–126 |
| github_stars | int64 | 0–55k |
| num_models | int64 | 0–82 |
| num_datasets | int64 | 0–7 |
| num_spaces | int64 | 0–100 |
DarkBench: Benchmarking Dark Patterns in Large Language Models
https://openreview.net/forum?id=odjMSBSWRt
[ "Esben Kran", "Hieu Minh Nguyen", "Akash Kundu", "Sami Jawhar", "Jinsuk Park", "Mateusz Maria Jurewicz" ]
Oral
We introduce DarkBench, a comprehensive benchmark for detecting dark design patterns—manipulative techniques that influence user behavior—in interactions with large language models (LLMs). Our benchmark comprises 660 prompts across six categories: brand bias, user retention, sycophancy, anthropomorphism, harmful genera...
Dark Patterns, AI Deception, Large Language Models
We introduce DarkBench, a benchmark revealing that many large language models employ manipulative dark design patterns. Organizations developing LLMs should actively recognize and mitigate the impact of dark design patterns to promote ethical AI.
14,257
2503.10728
[ -0.02725336328148842, -0.03766116499900818, 0.009408959187567234, 0.008022491820156574, 0.02661564014852047, -0.041066356003284454, -0.015834596008062363, -0.038164280354976654, 0.06413612514734268, -0.06206069141626358, -0.07550547271966934, -0.046710625290870667, 0.03552878648042679, -0....
0
0
0
0
RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style
https://openreview.net/forum?id=QEHrmQPBdd
[ "Yantao Liu", "Zijun Yao", "Rui Min", "Yixin Cao", "Lei Hou", "Juanzi Li" ]
Oral
Reward models are critical in techniques like Reinforcement Learning from Human Feedback (RLHF) and Inference Scaling Laws, where they guide language model alignment and select optimal responses. Despite their importance, existing reward model benchmarks often evaluate models by asking them to distinguish between resp...
Reward Models, Language Models, Evaluation, Alignment
null
13,985
null
[ -0.08573347330093384, -0.07113663852214813, 0.02327777072787285, 0.03225872293114662, 0.0407232865691185, 0.06568308174610138, -0.01666353829205036, -0.042861457914114, 0.08131477236747742, 0.01655031368136406, -0.09637922048568726, -0.0660894364118576, 0.06317998468875885, 0.0042505152523...
0
0
0
0
TopoLM: brain-like spatio-functional organization in a topographic language model
https://openreview.net/forum?id=aWXnKanInf
[ "Neil Rathi", "Johannes Mehrer", "Badr AlKhamissi", "Taha Osama A Binhuraib", "Nicholas Blauch", "Martin Schrimpf" ]
Oral
Neurons in the brain are spatially organized such that neighbors on tissue often exhibit similar response profiles. In the human language system, experimental studies have observed clusters for syntactic and semantic categories, but the mechanisms underlying this functional organization remain unclear. Here, building o...
language modeling, topography, fMRI, neuroscience
We develop a transformer language model with topographically organized units predicting brain-like spatio-functional organization.
13,712
2410.11516
[ 0.04553741216659546, -0.1560545116662979, 0.05150135979056358, 0.026262419298291206, 0.04672883078455925, 0.006334717385470867, -0.019015828147530556, 0.02199351042509079, 0.09638210386037827, -0.017915070056915283, -0.038319721817970276, -0.08659375458955765, 0.029651854187250137, 0.08215...
0
0
0
0
Spider 2.0: Evaluating Language Models on Real-World Enterprise Text-to-SQL Workflows
https://openreview.net/forum?id=XmProj9cPs
[ "Fangyu Lei", "Jixuan Chen", "Yuxiao Ye", "Ruisheng Cao", "Dongchan Shin", "Hongjin SU", "ZHAOQING SUO", "Hongcheng Gao", "Wenjing Hu", "Pengcheng Yin", "Victor Zhong", "Caiming Xiong", "Ruoxi Sun", "Qian Liu", "Sida Wang", "Tao Yu" ]
Oral
Real-world enterprise text-to-SQL workflows often involve complex cloud or local data across various database systems, multiple SQL queries in various dialects, and diverse operations from data transformation to analytics. We introduce Spider 2.0, an evaluation framework comprising 632 real-world text-to-SQL workflow...
LLM Benchmark, Data Science and Engineering, Code Generation, Text-to-SQL, LLM Agent
A benchmark for enterprise-level Text-to-SQL involving complex databases, challenging tasks, and real-world scenarios.
13,657
2411.07763
[ -0.03679848089814186, -0.04623184725642204, -0.04603664577007294, 0.04519940912723541, -0.014134707860648632, -0.09154586493968964, -0.0011208809446543455, -0.01573074981570244, -0.018215196207165718, -0.00974492821842432, -0.08401896059513092, -0.06710424274206161, 0.04240303114056587, -0...
0
0
0
0
Knowledge Entropy Decay during Language Model Pretraining Hinders New Knowledge Acquisition
https://openreview.net/forum?id=eHehzSDUFp
[ "Jiyeon Kim", "Hyunji Lee", "Hyowon Cho", "Joel Jang", "Hyeonbin Hwang", "Seungpil Won", "Youbin Ahn", "Dohaeng Lee", "Minjoon Seo" ]
Oral
In this work, we investigate how a model's tendency to broadly integrate its parametric knowledge evolves throughout pretraining, and how this behavior affects overall performance, particularly in terms of knowledge acquisition and forgetting. We introduce the concept of knowledge entropy, which quantifies the range of...
knowledge entropy, knowledge acquisition and forgetting, evolving behavior during LLM pretraining
As pretraining progresses, models exhibit narrower integration of memory vectors, reflected by decreasing knowledge entropy, which hinders both knowledge acquisition and retention.
13,581
2410.01380
[ 0.08067429810762405, -0.06784215569496155, 0.0016120108775794506, 0.1156923845410347, 0.041765280067920685, 0.08341261744499207, 0.03904664143919945, -0.009606799110770226, 0.08937966078519821, -0.005971039179712534, 0.00008143703598761931, 0.08783090859651566, 0.08362888544797897, -0.0014...
https://github.com/kaistai/knowledge-entropy
9
0
0
0
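The knowledge-entropy idea in the entry above reduces to a Shannon entropy over the distribution of memory-source coefficients a model uses when recalling knowledge: lower entropy means narrower integration. A minimal sketch, assuming access to nonnegative memory coefficients (the paper's exact definition over parametric memory vectors may differ):

```python
import numpy as np

def knowledge_entropy(coeffs):
    """Shannon entropy of a model's (nonnegative) memory coefficients.

    `coeffs` is a 1-D array of attention-like weights over memory vectors;
    lower entropy means the model integrates a narrower set of memories.
    """
    p = np.abs(np.asarray(coeffs, dtype=float))
    p = p / p.sum()
    p = p[p > 0]  # avoid log(0)
    return float(-(p * np.log(p)).sum())

# Toy illustration: a broad distribution has higher entropy than a peaked one.
print(knowledge_entropy(np.ones(100)))                          # ~4.61 (broad)
print(knowledge_entropy(np.array([0.97] + [0.03 / 99] * 99)))   # much lower
```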
Diffusion-Based Planning for Autonomous Driving with Flexible Guidance
https://openreview.net/forum?id=wM2sfVgMDH
[ "Yinan Zheng", "Ruiming Liang", "Kexin ZHENG", "Jinliang Zheng", "Liyuan Mao", "Jianxiong Li", "Weihao Gu", "Rui Ai", "Shengbo Eben Li", "Xianyuan Zhan", "Jingjing Liu" ]
Oral
Achieving human-like driving behaviors in complex open-world environments is a critical challenge in autonomous driving. Contemporary learning-based planning approaches such as imitation learning methods often struggle to balance competing objectives and lack safety assurance, due to limited adaptability and inadequa...
diffusion planning, autonomous driving
null
13,578
2501.15564
[ 0.003697252133861184, -0.10823868960142136, -0.02056341990828514, 0.062148094177246094, 0.029695900157094002, -0.01762736774981022, -0.07689015567302704, -0.017527373507618904, -0.031918615102767944, -0.024829700589179993, -0.011157851666212082, -0.010659144259989262, -0.0022969350684434175,...
0
0
0
0
Learning to Search from Demonstration Sequences
https://openreview.net/forum?id=v593OaNePQ
[ "Dixant Mittal", "Liwei Kang", "Wee Sun Lee" ]
Oral
Search and planning are essential for solving many real-world problems. However, in numerous learning scenarios, only action-observation sequences, such as demonstrations or instruction sequences, are available for learning. Relying solely on supervised learning with these sequences can lead to sub-optimal performance ...
planning, reasoning, learning to search, reinforcement learning, large language model
We propose a method that constructs a search tree in a differentiable manner and can be trained from just demonstration sequences.
13,425
null
[ -0.05436089262366295, -0.08911566436290741, 0.03811679035425186, -0.0037279443349689245, 0.04582573473453522, 0.03448101133108139, -0.045912884175777435, -0.009847477078437805, -0.018625015392899513, 0.011606894433498383, -0.02260192297399044, -0.011133828200399876, 0.018584217876195908, 0...
0
0
0
0
Measuring and Enhancing Trustworthiness of LLMs in RAG through Grounded Attributions and Learning to Refuse
https://openreview.net/forum?id=Iyrtb9EJBp
[ "Maojia Song", "Shang Hong Sim", "Rishabh Bhardwaj", "Hai Leong Chieu", "Navonil Majumder", "Soujanya Poria" ]
Oral
LLMs are an integral component of retrieval-augmented generation (RAG) systems. While many studies focus on evaluating the overall quality of end-to-end RAG systems, there is a gap in understanding the appropriateness of LLMs for the RAG task. To address this, we introduce Trust-Score, a holistic metric that evaluates ...
Large Language Models, Trustworthiness, Hallucinations, Retrieval Augmented Generation
How to better evaluate LLMs for the RAG task, and how to make them better at it
13,377
2409.11242
[ -0.1142926886677742, -0.029256857931613922, -0.024079805240035057, -0.007724442984908819, -0.020343737676739693, -0.005263214465230703, 0.03379476070404053, -0.011266677640378475, 0.10013871639966965, -0.02107248827815056, 0.00016581999079789966, -0.03880157321691513, 0.14433534443378448, ...
https://github.com/declare-lab/trust-align
51
0
1
0
MAP: Multi-Human-Value Alignment Palette
https://openreview.net/forum?id=NN6QHwgRrQ
[ "Xinran Wang", "Qi Le", "Ammar Ahmed", "Enmao Diao", "Yi Zhou", "Nathalie Baracaldo", "Jie Ding", "Ali Anwar" ]
Oral
Ensuring that generative AI systems align with human values is essential but challenging, especially when considering multiple human values and their potential trade-offs. Since human values can be personalized and dynamically change over time, the desirable levels of value alignment vary across different ethnic groups...
Human value alignment, Generative model
The paper introduces Multi-Human-Value Alignment Palette (MAP), a novel approach to align generative models with multiple human values in a principled way.
13,248
2410.19198
[ -0.02129107341170311, 0.006724673323333263, -0.036662619560956955, -0.06650504469871521, -0.016665300354361534, 0.04870571568608284, 0.0426325760781765, 0.0040106638334691525, 0.013574006967246532, -0.04951547458767891, -0.03328584507107735, -0.14591453969478607, 0.00524666765704751, 0.017...
0
0
0
0
Can Neural Networks Achieve Optimal Computational-statistical Tradeoff? An Analysis on Single-Index Model
https://openreview.net/forum?id=is4nCVkSFA
[ "Siyu Chen", "Beining Wu", "Miao Lu", "Zhuoran Yang", "Tianhao Wang" ]
Oral
In this work, we tackle the following question: Can neural networks trained with gradient-based methods achieve the optimal statistical-computational tradeoff in learning Gaussian single-index models? Prior research has shown that any polynomial-time algorithm under the statistical query (SQ) framework requires $\Omeg...
single-index model, feature learning, gradient-based method, computational-statistical tradeoff
We propose a unified gradient-based algorithm for feature learning in Gaussian single-index model with sample complexity matching the SQ lower bound
13,084
null
[ -0.10704405605792999, -0.10548006743192673, 0.07906994968652725, 0.09473977982997894, 0.049008872359991074, 0.04187753424048424, -0.027378041297197342, -0.022082936018705368, 0.02989555150270462, -0.05141766369342804, -0.0634932816028595, 0.028668809682130814, 0.016813315451145172, -0.0023...
0
0
0
0
Consistency Checks for Language Model Forecasters
https://openreview.net/forum?id=r5IXBlTCGc
[ "Daniel Paleka", "Abhimanyu Pallavi Sudhir", "Alejandro Alvarez", "Vineeth Bhat", "Adam Shen", "Evan Wang", "Florian Tramèr" ]
Oral
Forecasting is a task that is difficult to evaluate: the ground truth can only be known in the future. Recent work showing LLM forecasters rapidly approaching human-level performance raises the question: how can we benchmark and evaluate these forecasters *instantaneously*? Following the consistency check framework, we m...
forecasting, markets, trading, LLM, evaluation, eval, consistency, robustness
It is difficult to evaluate AI forecasters instantaneously; we propose market-based consistency evals on LLM forecasters and show plenty of inconsistency.
13,065
2412.18544
[ -0.07717876136302948, -0.09865415841341019, -0.04396665096282959, 0.050274740904569626, 0.033055178821086884, 0.021076753735542297, 0.012797188013792038, -0.0023351050913333893, 0.012750214897096157, -0.03271811455488205, -0.11732664704322815, -0.07693871855735779, 0.03273208066821098, 0.0...
0
0
0
0
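One of the consistency checks described in the Consistency Checks entry above can be reproduced in a few lines: a coherent forecaster must assign complementary probabilities to a question and its negation, and the violation is measurable today, before the ground truth resolves. A minimal sketch (the paper's full suite and its arbitrage-based metrics are richer):

```python
def negation_inconsistency(p_event: float, p_negation: float) -> float:
    """For a coherent forecaster, P(A) + P(not A) = 1; return the violation."""
    return abs(p_event + p_negation - 1.0)

# An LLM forecaster answering 0.7 for "X happens by 2026" and 0.5 for its
# negation is inconsistent by 0.2, regardless of what actually happens.
print(negation_inconsistency(0.7, 0.5))  # 0.2
```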
Spread Preference Annotation: Direct Preference Judgment for Efficient LLM Alignment
https://openreview.net/forum?id=BPgK5XW1Nb
[ "Dongyoung Kim", "Kimin Lee", "Jinwoo Shin", "Jaehyung Kim" ]
Oral
Aligning large language models (LLMs) with human preferences has become a key component of obtaining state-of-the-art performance, but constructing a large human-annotated preference dataset comes at a huge cost. To tackle this problem, we propose a new framework, Spread Preference Annotation with direct preference judgm...
large language model, alignment, preference
null
12,928
2406.04412
[ -0.09205672889947891, -0.08180037140846252, -0.01853586733341217, 0.02387421391904354, 0.05137999355792999, 0.04248807579278946, 0.025502393022179604, 0.022205257788300514, 0.020403213798999786, -0.004675958771258593, 0.011237965896725655, -0.0845503956079483, 0.04652528092265129, 0.034931...
0
0
0
0
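The "direct preference judgment" in the Spread Preference Annotation entry is naturally implemented with the DPO-style implicit reward, where the policy's own log-probability ratio against a reference model scores candidate responses with no separate reward model; whether this matches the paper's exact judging rule is an assumption of this sketch:

```python
def implicit_reward(logp_policy: float, logp_ref: float, beta: float = 0.1) -> float:
    """DPO-style implicit reward: beta * (log pi(y|x) - log pi_ref(y|x))."""
    return beta * (logp_policy - logp_ref)

def judge(pair):
    """Prefer the response whose implicit reward is higher."""
    (lp_a, lr_a), (lp_b, lr_b) = pair
    return "a" if implicit_reward(lp_a, lr_a) > implicit_reward(lp_b, lr_b) else "b"

# Response "a" gains more log-probability over the reference, so it is preferred.
print(judge(((-12.0, -14.0), (-10.0, -9.5))))  # "a"
```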
Brain Bandit: A Biologically Grounded Neural Network for Efficient Control of Exploration
https://openreview.net/forum?id=RWJX5F5I9g
[ "Chen Jiang", "Jiahui An", "Yating Liu", "Ni Ji" ]
Oral
How to balance exploration and exploitation in an uncertain environment is a central challenge in reinforcement learning. By contrast, humans and animals have demonstrated superior exploration efficiency in novel environments. To understand how the brain’s neural network controls exploration under uncertainty, ...
explore-exploit, stochastic Hopfield network, Thompson sampling, decision under uncertainty, brain-inspired algorithm, reinforcement learning
We demonstrate that a brain-inspired stochastic Hopfield network can achieve efficient, human-like, uncertainty-aware exploration in bandit and MDP tasks.
12,774
null
[ -0.048125628381967545, -0.06399022042751312, 0.02736765518784523, 0.013729391619563103, -0.026136204600334167, -0.013279981911182404, 0.06835620850324631, -0.022411705926060677, 0.06214163452386856, -0.04466582462191582, -0.05740915611386299, -0.01716732047498226, 0.03107030689716339, 0.01...
0
0
0
0
MaestroMotif: Skill Design from Artificial Intelligence Feedback
https://openreview.net/forum?id=or8mMhmyRV
[ "Martin Klissarov", "Mikael Henaff", "Roberta Raileanu", "Shagun Sodhani", "Pascal Vincent", "Amy Zhang", "Pierre-Luc Bacon", "Doina Precup", "Marlos C. Machado", "Pierluca D'Oro" ]
Oral
Describing skills in natural language has the potential to provide an accessible way to inject human knowledge about decision-making into an AI system. We present MaestroMotif, a method for AI-assisted skill design, which yields high-performing and adaptable agents. MaestroMotif leverages the capabilities of Large Lang...
Hierarchical RL, Reinforcement Learning, LLMs
A method for AI-assisted skill design via Motif and LLM code generation, solving tasks zero-shot from language descriptions on NetHack.
12,735
2412.08542
[ -0.023494863882660866, -0.0449480339884758, 0.025618351995944977, 0.07570120692253113, -0.020263411104679108, -0.00032232367084361613, -0.005257580894976854, 0.03576262295246124, -0.04352348670363426, 0.013138490729033947, -0.07918298244476318, -0.09636082500219345, 0.06023058295249939, -0...
0
0
0
0
Learning to Discover Regulatory Elements for Gene Expression Prediction
https://openreview.net/forum?id=Mfnh1Sqdwf
[ "Xingyu Su", "Haiyang Yu", "Degui Zhi", "Shuiwang Ji" ]
Oral
We consider the problem of predicting gene expressions from DNA sequences. A key challenge of this task is to find the regulatory elements that control gene expressions. Here, we introduce Seq2Exp, a Sequence to Expression network explicitly designed to discover and extract regulatory elements that drive target gene ex...
Gene Expression, Deep Learning, Sequence Modeling
null
12,644
2502.13991
[ -0.0845017358660698, -0.025793347507715225, 0.02984725870192051, 0.04579932615160942, 0.11021523922681808, -0.0022563128732144833, 0.0075287725776433945, -0.06440958380699158, -0.030917929485440254, -0.03083932027220726, -0.048635661602020264, -0.05303158238530159, -0.02617608569562435, -0...
https://github.com/divelab/AIRS
615
1
1
0
Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models
https://openreview.net/forum?id=tyEyYT267x
[ "Marianne Arriola", "Aaron Gokaslan", "Justin T Chiu", "Zhihan Yang", "Zhixuan Qi", "Jiaqi Han", "Subham Sekhar Sahoo", "Volodymyr Kuleshov" ]
Oral
Diffusion language models offer unique benefits over autoregressive models due to their potential for parallelized generation and controllability, yet they lag in likelihood modeling and are limited to fixed-length generation. In this work, we introduce a class of block diffusion language models that interpolate betwee...
Diffusion Models, Text Diffusion, Generative Models
null
12,566
2503.09573
[ -0.10416851937770844, -0.12655200064182281, 0.026569997891783714, 0.018532857298851013, -0.012075117789208889, -0.006582919973880053, -0.05516336113214493, -0.04498690366744995, 0.07734091579914093, -0.0758652314543724, -0.022044286131858826, -0.03679569438099861, -0.03500502184033394, -0....
https://github.com/kuleshov-group/bd3lms
556
8
0
0
Syntactic and Semantic Control of Large Language Models via Sequential Monte Carlo
https://openreview.net/forum?id=xoXn62FzD0
[ "João Loula", "Benjamin LeBrun", "Li Du", "Ben Lipkin", "Clemente Pasti", "Gabriel Grand", "Tianyu Liu", "Yahya Emara", "Marjorie Freedman", "Jason Eisner", "Ryan Cotterell", "Vikash Mansinghka", "Alexander K. Lew", "Tim Vieira", "Timothy J. O'Donnell" ]
Oral
A wide range of LM applications require generating text that conforms to syntactic or semantic constraints. Imposing such constraints can be naturally framed as probabilistic conditioning, but exact generation from the resulting distribution—which can differ substantially from the LM’s base distribution—is generally in...
Sequential Monte Carlo, Language Models, Semantic parsing, Bayesian inference, Probabilistic programming, SMC
We introduce a sequential Monte Carlo framework for controlling LMs at inference time via both syntactic and semantic constraints.
12,536
null
[ -0.06695172190666199, -0.09970736503601074, 0.004899227526038885, 0.06267884373664856, 0.03959203138947487, -0.035926658660173416, -0.04199553653597832, -0.024605117738246918, -0.003779246238991618, -0.033395905047655106, -0.02965357154607773, -0.08184044063091278, 0.08553116768598557, -0....
0
0
0
0
Scaling Laws for Precision
https://openreview.net/forum?id=wg1PCg3CUP
[ "Tanishq Kumar", "Zachary Ankner", "Benjamin Frederick Spector", "Blake Bordelon", "Niklas Muennighoff", "Mansheej Paul", "Cengiz Pehlevan", "Christopher Re", "Aditi Raghunathan" ]
Oral
Low precision training and inference affect both the quality and cost of language models, but current scaling laws do not account for this. In this work, we devise "precision-aware" scaling laws for both training and inference. We propose that training in lower precision reduces the model's "effective parameter count,"...
quantization, scaling laws, precision, language models
We model the effects of precision on language model loss scaling, both during and after training. We find that overtrained models degrade more when quantized at inference time, and that training larger models in lower precision can be optimal.
12,529
2411.04330
[ -0.02280518412590027, -0.0446615144610405, 0.027057068422436714, 0.09494249522686005, 0.0349905900657177, 0.05653073266148567, -0.028869183734059334, 0.031977467238903046, 0.03930889815092087, -0.06456945091485977, -0.048771847039461136, -0.03631022572517395, -0.007628241553902626, 0.03554...
0
0
0
0
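The "effective parameter count" idea from the Scaling Laws for Precision entry lends itself to a compact functional form: a Chinchilla-style loss in which low weight precision discounts the usable parameters. All constants and the exact saturating form of N_eff(P) below are illustrative assumptions, not the paper's fitted values:

```python
import math

def precision_aware_loss(N, D, P, A=400.0, alpha=0.34, B=410.0, beta=0.28,
                         E=1.7, gamma=2.0):
    """Chinchilla-style loss with parameters discounted by training precision.

    N: parameter count, D: training tokens, P: weight precision in bits.
    N_eff saturates toward N as precision grows (illustrative form).
    """
    N_eff = N * (1.0 - math.exp(-P / gamma))
    return A / N_eff**alpha + B / D**beta + E

# Lower precision behaves like a smaller model at the same parameter count.
print(precision_aware_loss(1e9, 2e10, P=16))
print(precision_aware_loss(1e9, 2e10, P=4))   # higher loss
```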
Context-Parametric Inversion: Why Instruction Finetuning May Not Actually Improve Context Reliance
https://openreview.net/forum?id=SPS6HzVzyt
[ "Sachin Goyal", "Christina Baek", "J Zico Kolter", "Aditi Raghunathan" ]
Oral
Large Language Models are instruction-finetuned to enhance their ability to follow user instructions and better comprehend input context. Still, they often struggle to follow the input context, especially when it contradicts the model's parametric knowledge. This manifests as various failures, such as hallucinations where...
Instruction finetuning, context-vs-parametric reliance
We highlight a surprising phenomenon: a model's context reliance unexpectedly decreases with instruction finetuning, despite an initial increase.
12,499
2410.10796
[ 0.02301429957151413, -0.08786125481128693, 0.04208764061331749, 0.03361399844288826, -0.004300509579479694, -0.04522928595542908, 0.0048074680380523205, -0.014354490675032139, 0.08749780803918839, -0.015223857015371323, 0.03223327919840813, 0.006410186178982258, 0.03412650525569916, -0.053...
0
0
0
0
Inference Scaling for Long-Context Retrieval Augmented Generation
https://openreview.net/forum?id=FSjIrOm1vz
[ "Zhenrui Yue", "Honglei Zhuang", "Aijun Bai", "Kai Hui", "Rolf Jagerman", "Hansi Zeng", "Zhen Qin", "Dong Wang", "Xuanhui Wang", "Michael Bendersky" ]
Oral
The scaling of inference computation has unlocked the potential of long-context large language models (LLMs) across diverse settings. For knowledge-intensive tasks, the increased compute is often allocated to incorporate more external knowledge. However, without effectively utilizing such knowledge, solely expanding c...
inference scaling, long-context LLM, retrieval augmented generation
null
12,199
2410.04343
[ -0.02200368605554104, -0.0215761661529541, -0.034957874566316605, 0.08225730806589127, 0.06929649412631989, 0.00001974038968910463, 0.030747350305318832, 0.02639642357826233, 0.03313828632235527, -0.028458653017878532, -0.04045579582452774, -0.017510894685983658, 0.05551769584417343, -0.01...
0
0
0
0
Scaling LLM Test-Time Compute Optimally Can be More Effective than Scaling Parameters for Reasoning
https://openreview.net/forum?id=4FWAwZtd2n
[ "Charlie Victor Snell", "Jaehoon Lee", "Kelvin Xu", "Aviral Kumar" ]
Oral
Enabling LLMs to improve their outputs by using more test-time compute is a critical step towards building self-improving agents that can operate on open-ended natural language. In this paper, we scale up inference-time computation in LLMs, with a focus on answering: if an LLM is allowed to use a fixed but non-trivial ...
test-time compute, LLMs, scaling, language models
We find that by optimally scaling test-time compute we can outperform much larger models in a FLOPs matched evaluation.
12,182
null
[ -0.042627375572919846, -0.022569414228200912, -0.024731997400522232, 0.06360125541687012, 0.0603608675301075, -0.01802038960158825, -0.04073275253176689, 0.03412944823503494, 0.04735749587416649, 0.010439570061862469, -0.04172227159142494, -0.08073140680789948, -0.0088804941624403, 0.03055...
0
0
0
0
Capturing the Temporal Dependence of Training Data Influence
https://openreview.net/forum?id=uHLgDEgiS5
[ "Jiachen T. Wang", "Dawn Song", "James Zou", "Prateek Mittal", "Ruoxi Jia" ]
Oral
Traditional data influence estimation methods, like influence function, assume that learning algorithms are permutation-invariant with respect to training data. However, modern training paradigms—especially for foundation models using stochastic algorithms and non-convergent, multi-stage curricula—are sensitive to data...
data attribution
We introduce data value embedding, a novel real-time data attribution framework that approximates the trajectory-specific leave-one-out (LOO) error.
12,172
2412.09538
[ -0.046189289540052414, -0.060355495661497116, 0.048195477575063705, 0.036249641329050064, 0.11318808794021606, 0.007399837952107191, -0.015630926936864853, -0.07616320997476578, 0.04483131691813469, -0.05962204188108444, -0.031303223222494125, 0.027154631912708282, 0.057596053928136826, -0...
0
0
0
0
Self-Improvement in Language Models: The Sharpening Mechanism
https://openreview.net/forum?id=WJaUkwci9o
[ "Audrey Huang", "Adam Block", "Dylan J Foster", "Dhruv Rohatgi", "Cyril Zhang", "Max Simchowitz", "Jordan T. Ash", "Akshay Krishnamurthy" ]
Oral
Recent work in language modeling has raised the possibility of “self-improvement,” where an LLM evaluates and refines its own generations to achieve higher performance without external feedback. It is impossible for this self-improvement to create information that is not already in the model, so why should we expect th...
Learning theory, Sample complexity, Self-Improvement, Language Models
We offer a new theoretical perspective on the possibility of self-improvement in language models.
12,101
2412.01951
[ -0.032348886132240295, -0.06597328186035156, 0.015744319185614586, 0.03451663628220558, -0.0007317238487303257, -0.009865676052868366, 0.008729832246899605, 0.04799865931272507, 0.07856322079896927, -0.024277962744235992, -0.041931819170713425, 0.022515403106808662, -0.006884882692247629, ...
0
0
0
0
Data Shapley in One Training Run
https://openreview.net/forum?id=HD6bWcj87Y
[ "Jiachen T. Wang", "Prateek Mittal", "Dawn Song", "Ruoxi Jia" ]
Oral
Data Shapley offers a principled framework for attributing the contribution of data within machine learning contexts. However, the traditional notion of Data Shapley requires re-training models on various data subsets, which becomes computationally infeasible for large-scale models. Additionally, this retraining-based ...
Shapley value, data valuation.
We develop a new notion of Data Shapley that requires only one model training run.
12,092
2406.11011
[ -0.025477826595306396, -0.05350550264120102, 0.01528963539749384, 0.04293905943632126, 0.06823300570249557, 0.017385026440024376, -0.05833625793457031, -0.05443402752280235, -0.04556816443800926, -0.04826052859425545, -0.020860755816102028, -0.0253619235008955, 0.042847391217947006, -0.022...
0
0
0
0
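The building block behind a one-run Data Shapley is a first-order approximation: the effect of one SGD step on a training example is estimated by the inner product between that example's gradient and the gradient of the quantity being attributed. A minimal sketch of this term (the paper develops the full Shapley-weighted, per-iteration machinery on top of it):

```python
import torch

def first_order_contribution(params, loss_train, loss_val, lr):
    """Approximate how one SGD step on the training loss changes the
    validation loss, via the first-order term -lr * <g_train, g_val>."""
    g_train = torch.autograd.grad(loss_train, params, retain_graph=True)
    g_val = torch.autograd.grad(loss_val, params)
    dot = sum((gt * gv).sum() for gt, gv in zip(g_train, g_val))
    return (-lr * dot).item()

# Toy usage: a 1-D linear model, one training point and one validation point.
w = torch.tensor([1.0], requires_grad=True)
loss_train = (w * 2.0 - 1.0).pow(2).sum()   # train example (x=2, y=1)
loss_val = (w * 3.0 - 2.0).pow(2).sum()     # validation example (x=3, y=2)
print(first_order_contribution([w], loss_train, loss_val, lr=0.1))  # -2.4
```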
Computationally Efficient RL under Linear Bellman Completeness for Deterministic Dynamics
https://openreview.net/forum?id=hyfe5q5TD0
[ "Runzhe Wu", "Ayush Sekhari", "Akshay Krishnamurthy", "Wen Sun" ]
Oral
We study computationally and statistically efficient Reinforcement Learning algorithms for the *linear Bellman Complete* setting. This setting uses linear function approximation to capture value functions and unifies existing models like linear Markov Decision Processes (MDP) and Linear Quadratic Regulators (LQR). Whi...
reinforcement learning theory, linear function approximation
null
12,025
2406.11810
[ -0.10184461623430252, -0.036321040242910385, 0.039644110947847366, 0.016787759959697723, -0.024256011471152306, 0.015813812613487244, 0.02908526360988617, -0.10019839555025101, 0.02961423434317112, 0.0736934170126915, -0.026237457990646362, 0.032176364213228226, 0.014639558270573616, 0.046...
0
0
0
0
Linear Representations of Political Perspective Emerge in Large Language Models
https://openreview.net/forum?id=rwqShzb9li
[ "Junsol Kim", "James Evans", "Aaron Schein" ]
Oral
Large language models (LLMs) have demonstrated the ability to generate text that realistically reflects a range of different subjective human perspectives. This paper studies how LLMs are seemingly able to reflect more liberal versus more conservative viewpoints among other political perspectives in American politics. ...
large language model, political perspective, ideology, representation learning
LLMs possess linear representations of political perspectives (left-right) within the activation space. By applying linear interventions to the activation space, we can steer the model's outputs toward a more liberal or conservative stance.
11,965
2503.02080
[ -0.008777395822107792, -0.11435666680335999, 0.0019193971529603004, -0.0014888658188283443, 0.08400566130876541, 0.060511112213134766, -0.008292710408568382, -0.005973759572952986, 0.11787190288305283, -0.015528376214206219, -0.06568486988544464, -0.03187785670161247, 0.05416888743638992, ...
https://github.com/JunsolKim/RepresentationPoliticalLLM
11
0
0
0
Turning Up the Heat: Min-p Sampling for Creative and Coherent LLM Outputs
https://openreview.net/forum?id=FBkpCyujtS
[ "Nguyen Nhat Minh", "Andrew Baker", "Clement Neo", "Allen G Roush", "Andreas Kirsch", "Ravid Shwartz-Ziv" ]
Oral
Large Language Models (LLMs) generate text by sampling the next token from a probability distribution over the vocabulary at each decoding step. Popular sampling methods like top-p (nucleus sampling) often struggle to balance quality and diversity, especially at higher temperatures, which lead to incoherent or repetitiv...
Natural Language Processing, Large Language Models, Text Generation, Sampling Methods, Truncation Sampling, Stochastic Sampling, Min-p Sampling, Top-p Sampling, Nucleus Sampling, Temperature Sampling, Decoding Methods, Deep Learning, Artificial Intelligence
Min-p sampling, a dynamic truncation sampler for LLMs, improves text quality and diversity, especially at higher temperature settings.
11,935
null
[ -0.03571171686053276, -0.08311160653829575, 0.03172558173537254, 0.05507004261016846, 0.042257070541381836, -0.03854699060320854, -0.0012609437108039856, -0.0033707930706441402, 0.10229764133691788, 0.01427222415804863, -0.045318853110075, 0.006226471625268459, 0.06792031973600388, -0.0747...
0
0
0
0
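Min-p truncation as described in the entry above is simple to state: keep every token whose probability is at least a fraction p_base of the top token's probability, renormalize, and sample. A minimal sketch:

```python
import numpy as np

def min_p_sample(probs, p_base=0.1, rng=None):
    """Sample a token id under min-p truncation.

    The threshold scales with the model's confidence: a peaked distribution
    prunes aggressively, a flat one (e.g., at high temperature) keeps more.
    """
    if rng is None:
        rng = np.random.default_rng()
    keep = probs >= p_base * probs.max()
    filtered = np.where(keep, probs, 0.0)
    filtered /= filtered.sum()
    return rng.choice(len(probs), p=filtered)

probs = np.array([0.5, 0.3, 0.15, 0.05])
print(min_p_sample(probs, p_base=0.2))  # token 3 (0.05 < 0.2 * 0.5) never drawn
```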
Joint Graph Rewiring and Feature Denoising via Spectral Resonance
https://openreview.net/forum?id=zBbZ2vdLzH
[ "Jonas Linkerhägner", "Cheng Shi", "Ivan Dokmanić" ]
Oral
When learning from graph data, the graph and the node features both give noisy information about the node labels. In this paper we propose an algorithm to **j**ointly **d**enoise the features and **r**ewire the graph (JDR), which improves the performance of downstream node classification graph neural nets (GNNs). JDR w...
GNNs, Rewiring, Denoising, Spectral Resonance, cSBM
We introduce joint denoising and rewiring (JDR)—an algorithm to jointly rewire the graph and denoise the features, which improves the performance of downstream node classification GNNs.
11,879
2408.07191
[ -0.04648895189166069, -0.03615644574165344, 0.06285052001476288, 0.012858256697654724, -0.012922929599881172, -0.022357717156410217, -0.03704819083213806, -0.15052193403244019, -0.07632553577423096, -0.032117508351802826, -0.02840270660817623, 0.04604937508702278, -0.016532504931092262, -0...
https://github.com/jlinki/JDR
6
0
0
0
Iterative Nash Policy Optimization: Aligning LLMs with General Preferences via No-Regret Learning
https://openreview.net/forum?id=Pujt3ADZgI
[ "Yuheng Zhang", "Dian Yu", "Baolin Peng", "Linfeng Song", "Ye Tian", "Mingyue Huo", "Nan Jiang", "Haitao Mi", "Dong Yu" ]
Oral
Reinforcement Learning from Human Feedback (RLHF) has achieved great success in aligning large language models (LLMs) with human preferences. Prevalent RLHF approaches are reward-based, following the Bradley-Terry (BT) model assumption, which may not fully capture the complexity of human preferences. In this paper, we ...
RLHF Theory, LLM Alignment
null
11,848
2407.00617
[ -0.06019221618771553, -0.02926032245159149, -0.02100418694317341, -0.03971467539668083, 0.024840664118528366, 0.07565265893936157, 0.03312016278505325, 0.05373146012425423, 0.06334718316793442, 0.051520489156246185, -0.06628355383872986, -0.05309191346168518, 0.09744594246149063, 0.0186667...
0
0
0
0
Progressive Compression with Universally Quantized Diffusion Models
https://openreview.net/forum?id=CxXGvKRDnL
[ "Yibo Yang", "Justus Will", "Stephan Mandt" ]
Oral
Diffusion probabilistic models have achieved mainstream success in many generative modeling tasks, from image generation to inverse problem solving. A distinct feature of these models is that they correspond to deep hierarchical latent variable models optimizing a variational evidence lower bound (ELBO) on the data lik...
diffusion, generative modeling, compression, universal quantization
We improve practical compression with an unconditional diffusion model, proposing a new form of diffusion based on uniform noise instead of Gaussian noise.
11,617
2412.10935
[ -0.03646741434931755, -0.07514941692352295, 0.031239259988069534, -0.009109631180763245, 0.031056508421897888, -0.017138047143816948, -0.06634015589952469, -0.1081470400094986, 0.06415361911058426, -0.0076363873668015, 0.03500659018754959, 0.002604980254545808, -0.0005849702283740044, 0.01...
0
0
0
0
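"Universal quantization" in the progressive-compression entry refers to subtractive dithering: sender and receiver share a uniform dither, and rounding the dithered value makes the reconstruction error exactly uniform and independent of the input, which is what lets uniform-noise diffusion double as a compression mechanism. A minimal sketch:

```python
import numpy as np

def universally_quantize(x, rng):
    """Subtractive dithered (universal) quantization with unit step size.

    The reconstruction error is uniform on [-0.5, 0.5) and independent of x.
    """
    u = rng.uniform(-0.5, 0.5, size=np.shape(x))  # shared via a common seed
    q = np.round(x + u)          # integers: what actually gets entropy-coded
    return q - u                 # receiver subtracts the same dither

rng = np.random.default_rng(0)
x = np.array([0.3, -1.7, 2.2])
print(universally_quantize(x, rng) - x)  # errors all lie in [-0.5, 0.5)
```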
Accelerated training through iterative gradient propagation along the residual path
https://openreview.net/forum?id=JDm7oIcx4Y
[ "Erwan Fagnou", "Paul Caillon", "Blaise Delattre", "Alexandre Allauzen" ]
Oral
Despite being the cornerstone of deep learning, backpropagation is criticized for its inherent sequentiality, which can limit the scalability of very deep models. Such models once faced convergence issues due to vanishing gradients, later resolved by residual connections. Variants of these are now widely used in modern ar...
optimization, efficient training
We propose Highway backpropagation, a parallelizable algorithm that accelerates training by leveraging residual connections, achieving significant speedups with minimal performance loss across various architectures.
11,584
2501.17086
[ -0.07533235847949982, -0.0731007307767868, -0.0023864649701863527, 0.06420671194791794, 0.012557107955217361, 0.036061014980077744, -0.09360481053590775, -0.058778487145900726, -0.034124068915843964, -0.0914488136768341, -0.04057881981134415, -0.009349504485726357, -0.010476362891495228, -...
0
0
0
0
Tight Lower Bounds under Asymmetric High-Order Hölder Smoothness and Uniform Convexity
https://openreview.net/forum?id=fMTPkDEhLQ
[ "Site Bai", "Brian Bullins" ]
Oral
In this paper, we provide tight lower bounds for the oracle complexity of minimizing high-order Hölder smooth and uniformly convex functions. Specifically, for a function whose $p^{th}$-order derivatives are Hölder continuous with degree $\nu$ and parameter $H$, and that is uniformly convex with degree $q$ and paramete...
Convex Optimization, Uniform Convexity, Lower Bound, High-Order Method, Regularization, Hölder Smoothness
This paper establishes tight lower bounds for minimizing high-order Hölder smooth and uniformly convex functions with high-order oracle access.
11,481
null
[ -0.05122939497232437, -0.03579254075884819, 0.003268721979111433, -0.05597194656729698, 0.04368181154131889, 0.003818434663116932, 0.02923879586160183, 0.019140232354402542, 0.03748254477977753, 0.026904309168457985, -0.025728808715939522, 0.027478158473968506, 0.06571857631206512, -0.0870...
0
0
0
0
ShEPhERD: Diffusing shape, electrostatics, and pharmacophores for bioisosteric drug design
https://openreview.net/forum?id=KSLkFYHlYg
[ "Keir Adams", "Kento Abeywardane", "Jenna Fromer", "Connor W. Coley" ]
Oral
Engineering molecules to exhibit precise 3D intermolecular interactions with their environment forms the basis of chemical design. In ligand-based drug design, bioisosteric analogues of known bioactive hits are often identified by virtually screening chemical libraries with shape, electrostatic, and pharmacophore simil...
3D molecular generation, drug design, molecules
We design a diffusion model that jointly generates 3D molecules and explicit representations of their 3D shapes, electrostatics, and pharmacophores and demonstrate its utility in bioisosteric drug design
11,461
2411.04130
[ -0.02440534718334675, -0.13811109960079193, 0.05230933427810669, -0.03031476028263569, 0.0019277066458016634, -0.07824879139661789, -0.015966763719916344, 0.02290128543972969, 0.05835382640361786, -0.028825124725699425, -0.04267554730176926, -0.03693340718746185, 0.017329344525933266, 0.09...
0
0
0
0
Restructuring Vector Quantization with the Rotation Trick
https://openreview.net/forum?id=GMwRl2e9Y1
[ "Christopher Fifty", "Ronald Guenther Junkins", "Dennis Duan", "Aniketh Iyengar", "Jerry Weihong Liu", "Ehsan Amid", "Sebastian Thrun", "Christopher Re" ]
Oral
Vector Quantized Variational AutoEncoders (VQ-VAEs) are designed to compress a continuous input to a discrete latent space and reconstruct it with minimal distortion. They operate by maintaining a set of vectors---often referred to as the codebook---and quantizing each encoder output to the nearest vector in the codeb...
Vector Quantization, VQ-VAE
null
11,319
2410.06424
[ -0.0645083636045456, -0.0061912271194159985, -0.046987954527139664, -0.030169950798153877, -0.012205306440591812, 0.05521424114704132, -0.029729096218943596, -0.1098170056939125, 0.029352499172091484, -0.03929441422224045, 0.03788416460156441, -0.02121862955391407, -0.05075156316161156, -0...
https://github.com/lucidrains/vector-quantize-pytorch
3,179
0
0
0
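The rotation trick in the entry above replaces the straight-through estimator in VQ-VAEs: the encoder output e is carried onto the code q by a scaled rotation that is treated as a constant during backpropagation, so gradients reflect the geometry of the codebook lookup rather than being copied verbatim. A minimal sketch using a Householder-style composition (the paper's exact parameterization may differ):

```python
import torch

def rotate_to(e, q):
    """Map e to q via a scaled rotation frozen in the backward pass.

    Forward output equals q exactly; gradients flow through the fixed
    linear map x -> lam * R x, where R rotates e_hat onto q_hat.
    """
    e_hat = e / e.norm(dim=-1, keepdim=True)
    q_hat = (q / q.norm(dim=-1, keepdim=True)).detach()
    lam = (q.norm(dim=-1, keepdim=True) / e.norm(dim=-1, keepdim=True)).detach()
    r = e_hat + q_hat
    r = (r / r.norm(dim=-1, keepdim=True)).detach()
    e_hat_d = e_hat.detach()
    # R x = x - 2 (x.r) r + 2 (x.e_hat) q_hat: rotation in the e-q plane.
    Rx = e - 2 * (e * r).sum(-1, keepdim=True) * r \
           + 2 * (e * e_hat_d).sum(-1, keepdim=True) * q_hat
    return lam * Rx

e = torch.tensor([1.0, 2.0], requires_grad=True)
q = torch.tensor([3.0, 0.0])
out = rotate_to(e, q)
print(out)            # tensor([3., 0.], ...): forward equals the code vector
out.sum().backward()  # gradient passes through the frozen rotation, not an STE copy
```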
Interpreting Emergent Planning in Model-Free Reinforcement Learning
https://openreview.net/forum?id=DzGe40glxs
[ "Thomas Bush", "Stephen Chung", "Usman Anwar", "Adrià Garriga-Alonso", "David Krueger" ]
Oral
We present the first mechanistic evidence that model-free reinforcement learning agents can learn to plan. This is achieved by applying a methodology based on concept-based interpretability to a model-free agent in Sokoban -- a commonly used benchmark for studying planning. Specifically, we demonstrate that DRC, a gene...
reinforcement learning, interpretability, planning, probes, model-free, mechanistic interpretability, sokoban
We introduce and utilise a concept-based methodology to provide the first non-behavioural evidence that model-free agents can learn to plan
11,267
null
[ -0.04407472908496857, -0.0430087074637413, -0.02125045284628868, 0.08446668088436127, 0.06624895334243774, -0.01085092592984438, -0.06400581449270248, -0.009464561007916927, 0.036692798137664795, 0.09266352653503418, -0.06630163639783859, -0.028020299971103668, -0.010130748152732849, 0.064...
0
0
0
0
Standard Gaussian Process is All You Need for High-Dimensional Bayesian Optimization
https://openreview.net/forum?id=kX8h23UG6v
[ "Zhitong Xu", "Haitao Wang", "Jeff M. Phillips", "Shandian Zhe" ]
Oral
A long-standing belief holds that Bayesian Optimization (BO) with standard Gaussian processes (GP) --- referred to as standard BO --- underperforms in high-dimensional optimization problems. While this belief seems plausible, it lacks both robust empirical evidence and theoretical justification. To address this gap, we...
Gaussian Process, Bayesian Optimization, High Dimensional Bayesian Optimization
We identified a critical failure mode of standard Bayesian Optimization in high-dimensional optimization and proposed a simple yet effective solution
11,190
2402.02746
[ -0.07361572235822678, -0.09579011052846909, 0.024423662573099136, 0.010516107082366943, 0.030340803787112236, -0.05302204191684723, -0.029026849195361137, 0.012501262128353119, -0.03344501554965973, -0.05010722577571869, -0.0063897534273564816, 0.06366454809904099, 0.012369060888886452, -0...
https://github.com/xzt008/standard-gp-is-all-you-need-for-hdbo
11
0
0
0
Limits to scalable evaluation at the frontier: LLM as judge won’t beat twice the data
https://openreview.net/forum?id=NO6Tv6QcDs
[ "Florian E. Dorner", "Vivian Yvonne Nastl", "Moritz Hardt" ]
Oral
High quality annotations are increasingly a bottleneck in the explosively growing machine learning ecosystem. Scalable evaluation methods that avoid costly annotation have therefore become an important research ambition. Many hope to use strong existing models in lieu of costly labels to provide cheap model evaluations...
Evaluation, Benchmarking, Model-as-a-judge, Theory
null
11,163
null
[ -0.011078497394919395, -0.02506617270410061, -0.050171878188848495, 0.03793499246239662, 0.12787580490112305, 0.005419506691396236, -0.043336644768714905, 0.04396884888410568, 0.027983959764242172, -0.011830214411020279, -0.08319313824176788, -0.03291173651814461, 0.020141178742051125, -0....
0
0
0
0
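The estimators analyzed in the LLM-as-judge entry combine cheap judge scores on many examples with a small human-labeled subset used to remove the judge's bias, in the spirit of prediction-powered inference; the paper's headline result is that no such unbiased scheme can save more than a factor of two in human labels. A minimal sketch of the debiased estimator (assumed setup, not the paper's notation):

```python
import numpy as np

def debiased_accuracy(judge_all, judge_labeled, human_labeled):
    """Judge scores on all N examples, corrected by n << N human labels.

    Unbiased whenever the labeled subset is a uniform random sample.
    """
    return np.mean(judge_all) + np.mean(human_labeled - judge_labeled)

rng = np.random.default_rng(0)
truth = rng.random(10_000) < 0.6                            # latent correctness
judge = np.where(rng.random(10_000) < 0.8, truth, ~truth)   # 80%-accurate judge
idx = rng.choice(10_000, size=500, replace=False)
print(debiased_accuracy(judge.astype(float),
                        judge[idx].astype(float),
                        truth[idx].astype(float)))  # ~0.6: judge bias removed
```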
DEPT: Decoupled Embeddings for Pre-training Language Models
https://openreview.net/forum?id=vf5aUZT0Fz
[ "Alex Iacob", "Lorenzo Sani", "Meghdad Kurmanji", "William F. Shen", "Xinchi Qiu", "Dongqi Cai", "Yan Gao", "Nicholas Donald Lane" ]
Oral
Language Model pre-training uses broad data mixtures to enhance performance across domains and languages. However, training on such heterogeneous text corpora requires extensive and expensive efforts. Since these data sources vary significantly in lexical, syntactic, and semantic aspects, they cause negative interferen...
Decentralized Training, Federated Learning, Multi-domain Training, Multilingual Training
We propose DEPT, a pre-training framework that decouples embedding layers from the transformer body, enabling robust training on heterogeneous data, improving generalization, and reducing memory footprint.
11,135
2410.05021
[ -0.048369817435741425, -0.1058519035577774, 0.05737373232841492, 0.03472886234521866, 0.007482144515961409, -0.019100822508335114, -0.04449334368109703, -0.030135346576571465, -0.009911333210766315, -0.02922542579472065, -0.022322028875350952, -0.024406524375081062, 0.03852090239524841, 0....
0
0
0
0
Homomorphism Expressivity of Spectral Invariant Graph Neural Networks
https://openreview.net/forum?id=rdv6yeMFpn
[ "Jingchu Gai", "Yiheng Du", "Bohang Zhang", "Haggai Maron", "Liwei Wang" ]
Oral
Graph spectra are an important class of structural features on graphs that have shown promising results in enhancing Graph Neural Networks (GNNs). Despite their widespread practical use, the theoretical understanding of the power of spectral invariants --- particularly their contribution to GNNs --- remains incomplete....
Graph Neural Network, Expressive Power, Spectral Invariant, Graph Homomorphism, Weisfeiler-Lehman
We analyze the expressive power of spectral invariant graph neural networks from the perspective of graph homomorphisms.
11,126
2503.00485
[ -0.07011917978525162, -0.04011644050478935, 0.036658138036727905, 0.012721475213766098, 0.01566392183303833, 0.010356685146689415, -0.03013918548822403, -0.08334801346063614, -0.004378918558359146, -0.0330771803855896, 0.029225142672657967, -0.058223795145750046, -0.007683051750063896, 0.0...
0
0
0
0
RB-Modulation: Training-Free Stylization using Reference-Based Modulation
https://openreview.net/forum?id=bnINPG5A32
[ "Litu Rout", "Yujia Chen", "Nataniel Ruiz", "Abhishek Kumar", "Constantine Caramanis", "Sanjay Shakkottai", "Wen-Sheng Chu" ]
Oral
We propose Reference-Based Modulation (RB-Modulation), a new plug-and-play solution for training-free personalization of diffusion models. Existing training-free approaches exhibit difficulties in (a) style extraction from reference images in the absence of additional style or content text descriptions, (b) unwanted co...
Inverse Problems, Generative Modeling, Diffusion Models, Posterior Sampling, Optimal Control, Test-time Optimization
We propose Reference-Based Modulation (RB-Modulation), a new plug-and-play test-time optimization method for training-free personalization (stylization and content-style composition) of diffusion models using stochastic optimal control.
11,075
null
[ -0.020243750885128975, -0.047321613878011703, -0.02873917482793331, 0.07877519726753235, 0.12045585364103317, 0.04869963973760605, 0.02957407757639885, -0.004927582107484341, 0.007330374792218208, -0.05852898582816124, -0.05940144509077072, 0.05294011905789375, 0.08012105524539948, -0.0361...
0
0
0
0
Kinetix: Investigating the Training of General Agents through Open-Ended Physics-Based Control Tasks
https://openreview.net/forum?id=zCxGCdzreM
[ "Michael Matthews", "Michael Beukman", "Chris Lu", "Jakob Nicolaus Foerster" ]
Oral
While large models trained with self-supervised learning on offline datasets have shown remarkable capabilities in text and image domains, achieving the same generalisation for agents that act in sequential decision problems remains an open challenge. In this work, we take a step towards this goal by procedurally gener...
reinforcement learning, open-endedness, unsupervised environment design, automatic curriculum learning, benchmark
Training with reinforcement learning on a vast open-ended distribution of physics-based tasks leads to an agent that can zero-shot solve human-designed problems.
10,946
2410.23208
[ -0.04099930450320244, -0.07304088771343231, 0.027428042143583298, 0.03185269981622696, 0.007554244715720415, -0.045912716537714005, -0.03321884572505951, -0.043576113879680634, -0.004022159148007631, 0.03756449371576309, 0.01399239245802164, -0.06029491871595383, 0.014657555148005486, 0.07...
https://github.com/michaeltmatthews/jax2d
55
0
0
0
OptionZero: Planning with Learned Options
https://openreview.net/forum?id=3IFRygQKGL
[ "Po-Wei Huang", "Pei-Chiun Peng", "Hung Guei", "Ti-Rong Wu" ]
Oral
Planning with options -- sequences of primitive actions -- has been shown effective in reinforcement learning within complex environments. Previous studies have focused on planning with predefined options or options learned through expert demonstration data. Inspired by MuZero, which learns superhuman heuristics witho...
Option, Semi-MDP, MuZero, MCTS, Planning, Reinforcement Learning
This paper presents OptionZero, a method that integrates options into the MuZero algorithm, which autonomously discovers options through self-play games and utilizes options during planning.
10,733
2502.16634
[ -0.019228283315896988, -0.015080958604812622, 0.030623408034443855, -0.0302586629986763, -0.03120587021112442, -0.038513608276844025, 0.045575130730867386, 0.031493544578552246, -0.051263608038425446, 0.0246096383780241, -0.06873791664838791, -0.039749156683683395, 0.006773889064788818, 0....
https://github.com/rlglab/optionzero
12
0
0
0
Instant Policy: In-Context Imitation Learning via Graph Diffusion
https://openreview.net/forum?id=je3GZissZc
[ "Vitalis Vosylius", "Edward Johns" ]
Oral
Following the impressive capabilities of in-context learning with large transformers, In-Context Imitation Learning (ICIL) is a promising opportunity for robotics. We introduce Instant Policy, which learns new tasks instantly from just one or two demonstrations, achieving ICIL through two key components. First, we intr...
In-context Imitation Learning, Robotic Manipulation, Graph Neural Networks, Diffusion Models
We formulate In-Context Imitation Learning as a diffusion-based graph generation problem and learn it using procedurally generated pseudo-demonstrations.
10,684
2411.12633
[ -0.056742195039987564, -0.09866922348737717, 0.040444523096084595, -0.01573552004992962, 0.08020905405282974, -0.03916596248745918, 0.027462545782327652, -0.03943251073360443, 0.021910902112722397, 0.049079373478889465, 0.05605080723762512, -0.039511580020189285, 0.028415558859705925, 0.06...
0
0
0
0
What should a neuron aim for? Designing local objective functions based on information theory
https://openreview.net/forum?id=CLE09ESvul
[ "Andreas Christian Schneider", "Valentin Neuhaus", "David Alexander Ehrlich", "Abdullah Makkeh", "Alexander S Ecker", "Viola Priesemann", "Michael Wibral" ]
Oral
In modern deep neural networks, the learning dynamics of individual neurons are often obscure, as the networks are trained via global optimization. Conversely, biological systems build on self-organized, local learning, achieving robustness and efficiency with limited global information. Here, we show how self-organiza...
local learning, interpretability, neuro-inspired, information theory, partial information decomposition
This paper proposes using Partial Information Decomposition as a local objective for neurons to improve neuron-level interpretability.
10,601
2412.02482
[ -0.04401542618870735, -0.02207105979323387, 0.026856515556573868, -0.01636608876287937, 0.008833199739456177, -0.023319873958826065, 0.040083009749650955, -0.06142977252602577, 0.030269939452409744, -0.00581824267283082, -0.057608045637607574, -0.004082804080098867, 0.028534134849905968, 0...
https://github.com/priesemann-group/infomorphic_networks
5
0
0
0
Cross-Entropy Is All You Need To Invert the Data Generating Process
https://openreview.net/forum?id=hrqNOxpItr
[ "Patrik Reizinger", "Alice Bizeul", "Attila Juhos", "Julia E Vogt", "Randall Balestriero", "Wieland Brendel", "David Klindt" ]
Oral
Supervised learning has become a cornerstone of modern machine learning, yet a comprehensive theory explaining its effectiveness remains elusive. Empirical phenomena, such as neural analogy-making and the linear representation hypothesis, suggest that supervised models can learn interpretable factors of variation in a ...
supervised learning, representation learning, identifiability, linear representation hypothesis
We prove that models trained with cross-entropy in supervised learning can recover latent factors of the data-generating process up to a linear transformation, supported by both theoretical results and empirical evidence.
10,530
2410.21869
[ -0.06765908747911453, -0.10689441859722137, 0.029375553131103516, 0.03684259206056595, 0.12532126903533936, 0.005684150382876396, 0.04610951989889145, -0.10564838349819183, 0.04236248880624771, -0.04262556508183479, -0.04799490049481392, -0.09411440044641495, -0.0010822828626260161, 0.0897...
0
0
0
0
Unlocking State-Tracking in Linear RNNs Through Negative Eigenvalues
https://openreview.net/forum?id=UvTo3tVBk2
[ "Riccardo Grazzi", "Julien Siems", "Arber Zela", "Jörg K.H. Franke", "Frank Hutter", "Massimiliano Pontil" ]
Oral
Linear Recurrent Neural Networks (LRNNs) such as Mamba, RWKV, GLA, mLSTM, and DeltaNet have emerged as efficient alternatives to Transformers for long sequences. However, both Transformers and LRNNs struggle to perform state-tracking, which may impair performance in tasks such as code evaluation. In one forward pass, c...
State Tracking, State Space, Mamba, Linear RNN, Linear Attention, GLA, DeltaNet, Formal Languages, Products of Householders
We show that expanding the eigenvalue range of linear RNNs from [0, 1] to [-1, 1] enhances their state-tracking capabilities, enabling them to solve complex tasks like parity and modular counting, while preserving their efficiency in language modeling.
10,504
2411.12537
[ -0.13278412818908691, -0.08794379234313965, -0.06561003625392914, -0.021803325042128563, 0.01167216431349516, 0.04007749259471893, -0.021881945431232452, -0.05825493112206459, -0.018983909860253334, -0.060020577162504196, 0.029568830505013466, -0.021839529275894165, -0.07230459898710251, 0...
https://github.com/automl/unlocking_state_tracking
14
0
0
0
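The parity claim in the entry above has a two-line demonstration: a linear recurrent unit whose input-dependent transition value can reach -1 flips its sign on every 1-bit and therefore tracks parity exactly, which is impossible when transition values are confined to [0, 1]. A minimal sketch:

```python
def parity_via_linear_rnn(bits):
    """h_t = a_t * h_{t-1}, with input-dependent a_t in [-1, 1].

    a_t = -1 on a 1-bit (flip), a_t = +1 on a 0-bit (hold); h in {+1, -1}
    encodes even/odd parity of the 1s seen so far.
    """
    h = 1.0
    for b in bits:
        a = -1.0 if b == 1 else 1.0
        h = a * h
    return 0 if h > 0 else 1

print(parity_via_linear_rnn([1, 0, 1, 1]))  # 1 (three ones -> odd parity)
```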
Attention as a Hypernetwork
https://openreview.net/forum?id=V4K9h1qNxE
[ "Simon Schug", "Seijin Kobayashi", "Yassir Akram", "Joao Sacramento", "Razvan Pascanu" ]
Oral
Transformers can under some circumstances generalize to novel problem instances whose constituent parts might have been encountered during training, but whose compositions have not. What mechanisms underlie this ability for compositional generalization? By reformulating multi-head attention as a hypernetwork, we reveal...
attention, compositional generalization, abstract reasoning, in-context learning, transformer, mechanistic interpretability
Multiple heads in attention create reusable operations that support compositional generalization in abstract reasoning.
10,286
2406.05816
[ 0.028306907042860985, -0.04619734734296799, 0.04716020077466965, 0.04049273952841759, 0.038408923894166946, 0.012578210793435574, 0.027401035651564598, -0.08794200420379639, -0.02576117403805256, 0.008804356679320335, -0.027291711419820786, -0.049977321177721024, 0.04247051477432251, 0.058...
https://github.com/smonsays/hypernetwork-attention
30
0
0
0
Transformers Provably Solve Parity Efficiently with Chain of Thought
https://openreview.net/forum?id=n2NidsYDop
[ "Juno Kim", "Taiji Suzuki" ]
Oral
This work provides the first theoretical analysis of training transformers to solve complex problems by recursively generating intermediate states, analogous to fine-tuning for chain-of-thought (CoT) reasoning. We consider training a one-layer transformer to solve the fundamental $k$-parity problem, extending the work ...
transformers, chain of thought, parity, self-consistency
null
10,210
2410.08633
[ -0.1315794289112091, -0.052215300500392914, 0.06480149924755096, 0.008164504542946815, 0.02160336822271347, -0.028866512700915337, -0.005712774116545916, -0.006920715793967247, 0.007421235088258982, 0.030589638277888298, -0.05391032621264458, -0.03604821860790253, 0.049491897225379944, 0.0...
0
0
0
0
Oscillatory State-Space Models
https://openreview.net/forum?id=GRMfXcAAFh
[ "T. Konstantin Rusch", "Daniela Rus" ]
Oral
We propose Linear Oscillatory State-Space models (LinOSS) for efficiently learning on long sequences. Inspired by cortical dynamics of biological neural networks, we base our proposed LinOSS model on a system of forced harmonic oscillators. A stable discretization, integrated over time using fast associative parallel s...
state-space models, sequence models, oscillators, long-range interactions, time-series
Oscillatory state-space models are provably able to learn long-range interactions of arbitrary length, are universal, and achieve state-of-the-art performance in practice.
10,203
2410.03943
[ -0.1186077743768692, -0.08288227021694183, 0.0457480326294899, 0.02636740542948246, -0.01492410060018301, 0.08232278376817703, -0.11109501123428345, -0.06932230293750763, 0.007049995940178633, -0.013528703711926937, -0.03365267440676689, -0.014678915962576866, -0.06945260614156723, 0.05625...
0
0
0
0
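The forced-harmonic-oscillator view in the LinOSS entry can be made concrete with a toy recurrence in which each hidden unit follows y'' = -a*y + b*u. The sketch below uses symplectic Euler for stability; LinOSS's actual implicit discretization and associative parallel-scan implementation differ:

```python
import numpy as np

def oscillator_ssm(u, a, b, dt=0.1):
    """Toy forced-harmonic-oscillator SSM: y'' = -a*y + b*u.

    u: (T,) input sequence; a, b: per-unit parameters.
    Symplectic Euler keeps the oscillation stable over long horizons.
    """
    y = np.zeros_like(a, dtype=float)   # positions (the hidden state read out)
    z = np.zeros_like(a, dtype=float)   # velocities
    ys = []
    for u_t in u:
        z = z + dt * (-a * y + b * u_t)
        y = y + dt * z
        ys.append(y.copy())
    return np.array(ys)

T = 200
out = oscillator_ssm(np.sin(0.3 * np.arange(T)),
                     a=np.array([1.0, 4.0]), b=np.array([1.0, 1.0]))
print(out.shape)  # (200, 2): two oscillators with different natural frequencies
```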
Latent Bayesian Optimization via Autoregressive Normalizing Flows
https://openreview.net/forum?id=ZCOwwRAaEl
[ "Seunghun Lee", "Jinyoung Park", "Jaewon Chu", "Minseo Yoon", "Hyunwoo J. Kim" ]
Oral
Bayesian Optimization (BO) has been recognized for its effectiveness in optimizing expensive and complex objective functions. Recent advancements in Latent Bayesian Optimization (LBO) have shown promise by integrating generative models such as variational autoencoders (VAEs) to manage the complexity of high-dimensional...
Bayesian optimization, normalizing flow
null
10,104
null
[ -0.11375445127487183, -0.05898655205965042, 0.03558538854122162, 0.024861987680196762, -0.024429718032479286, 0.01877942867577076, -0.04701966419816017, -0.04853060469031334, 0.050777044147253036, -0.022583916783332825, -0.054064858704805374, -0.07209445536136627, 0.0008081020205281675, 0....
0
0
0
0
Energy-based Backdoor Defense Against Federated Graph Learning
https://openreview.net/forum?id=5Jc7r5aqHJ
[ "Guancheng Wan", "Zitong Shi", "Wenke Huang", "Guibin Zhang", "Dacheng Tao", "Mang Ye" ]
Oral
Federated Graph Learning is rapidly evolving as a privacy-preserving collaborative approach. However, backdoor attacks are increasingly undermining federated systems by injecting carefully designed triggers that lead to the model making incorrect predictions. Trigger structures and injection locations in Federated Grap...
Federated Learning, Graph Learning
null
10,018
null
[ -0.06707899272441864, 0.02260022796690464, 0.019017640501260757, 0.10136149823665619, 0.07875566929578781, -0.061255261301994324, -0.02925245836377144, -0.07719684392213821, 0.009804042987525463, 0.0008010162273421884, -0.03127795085310936, -0.021731572225689888, 0.12283814698457718, -0.02...
0
0
0
0
Reasoning Elicitation in Language Models via Counterfactual Feedback
https://openreview.net/forum?id=VVixJ9QavY
[ "Alihan Hüyük", "Xinnuo Xu", "Jacqueline R. M. A. Maasch", "Aditya V. Nori", "Javier Gonzalez" ]
Oral
Despite the increasing effectiveness of language models, their reasoning capabilities remain underdeveloped. In particular, causal reasoning through counterfactual question answering is lacking. This work aims to bridge this gap. We first derive novel metrics that balance accuracy in factual and counterfactual question...
language models, reasoning, fine-tuning, counterfactuals
A new approach to improving reasoning in language models via fine-tuning with counterfactual synthetic data.
9,840
2410.03767
[ -0.026156794279813766, -0.0715688169002533, 0.02961416356265545, 0.05977457016706467, 0.03315224125981331, -0.00988803245127201, 0.013619373552501202, 0.06086831912398338, -0.00016794624389149249, 0.0288274846971035, -0.07231791317462921, -0.12008178234100342, 0.07333796471357346, 0.062869...
0
0
0
0
CAX: Cellular Automata Accelerated in JAX
https://openreview.net/forum?id=o2Igqm95SJ
[ "Maxence Faldor", "Antoine Cully" ]
Oral
Cellular automata have become a cornerstone for investigating emergence and self-organization across diverse scientific disciplines. However, the absence of a hardware-accelerated cellular automata library limits the exploration of new research directions, hinders collaboration, and impedes reproducibility. In this wor...
cellular automata, emergence, self-organization, neural cellular automata
CAX is a high-performance and flexible open-source library designed to accelerate cellular automata research.
9,779
2410.02651
[ -0.11363959312438965, -0.0043806470930576324, 0.013033541850745678, -0.09348609298467636, -0.014504213817417622, -0.1084546372294426, -0.043261297047138214, -0.08603707700967789, 0.002588615519925952, -0.02862349897623062, -0.03557097911834717, -0.13162867724895477, 0.035078130662441254, 0...
https://github.com/maxencefaldor/cax
171
0
0
0
Proteina: Scaling Flow-based Protein Structure Generative Models
https://openreview.net/forum?id=TVQLu34bdw
[ "Tomas Geffner", "Kieran Didi", "Zuobai Zhang", "Danny Reidenbach", "Zhonglin Cao", "Jason Yim", "Mario Geiger", "Christian Dallago", "Emine Kucukbenli", "Arash Vahdat", "Karsten Kreis" ]
Oral
Recently, diffusion- and flow-based generative models of protein structures have emerged as a powerful tool for de novo protein design. Here, we develop *Proteina*, a new large-scale flow-based protein backbone generator that utilizes hierarchical fold class labels for conditioning and relies on a tailored scalable tra...
protein structure generation, de novo protein design, flow matching, fold class conditioning
We present a novel flow-based protein backbone generative model that uses a new scalable transformer architecture and conditions on fold class labels.
9,667
2503.00710
[ -0.11225531995296478, -0.09974198043346405, 0.0022493007127195597, -0.041086528450250626, -0.009309416636824608, -0.024495191872119904, -0.06656379252672195, 0.03945659101009369, 0.05491337925195694, -0.07042612880468369, -0.0007533821044489741, -0.05706946924328804, -0.029627647250890732, ...
https://github.com/NVIDIA-Digital-Bio/proteina
154
0
0
0
Residual Deep Gaussian Processes on Manifolds
https://openreview.net/forum?id=JWtrk7mprJ
[ "Kacper Wyrwal", "Andreas Krause", "Viacheslav Borovitskiy" ]
Oral
We propose practical deep Gaussian process models on Riemannian manifolds, similar in spirit to residual neural networks. With manifold-to-manifold hidden layers and an arbitrary last layer, they can model manifold- and scalar-valued functions, as well as vector fields. We target data inherently supported on manifolds,...
Gaussian processes, manifolds, deep Gaussian processes, probabilistic methods, variational inference, uncertainty quantification, geometric learning
null
9,577
2411.00161
[ -0.09141256660223007, -0.08559490740299225, 0.10040855407714844, 0.00583454966545105, 0.038020167499780655, -0.039597414433956146, -0.025046110153198242, -0.06289132684469223, -0.021725498139858246, -0.06384878605604172, -0.005712408106774092, 0.018374701961874962, 0.024438170716166496, 0....
https://github.com/KacperWyrwal/residual-deep-gps
5
0
0
0
Learning to Discretize Denoising Diffusion ODEs
https://openreview.net/forum?id=xDrFWUmCne
[ "Vinh Tong", "Dung Trung Hoang", "Anji Liu", "Guy Van den Broeck", "Mathias Niepert" ]
Oral
Diffusion Probabilistic Models (DPMs) are generative models showing competitive performance in various domains, including image synthesis and 3D point cloud generation. Sampling from pre-trained DPMs involves multiple neural function evaluations (NFEs) to transform Gaussian noise samples into images, resulting in highe...
Diffusion models, Efficient Sampling, Ordinary Differential Equations
null
9,534
2405.15506
[ -0.08821402490139008, -0.13183453679084778, 0.057442519813776016, 0.015152324922382832, 0.0431428886950016, -0.06414223462343216, -0.024707766249775887, -0.1254081130027771, 0.027979707345366478, -0.05376673862338066, -0.050709448754787445, -0.0659637600183487, -0.038836099207401276, -0.03...
https://github.com/vinhsuhi/ld3
12
0
0
0
Toward Guidance-Free AR Visual Generation via Condition Contrastive Alignment
https://openreview.net/forum?id=kGvXIlIVLM
[ "Huayu Chen", "Hang Su", "Peize Sun", "Jun Zhu" ]
Oral
Classifier-Free Guidance (CFG) is a critical technique for enhancing the sample quality of visual generative models. However, in autoregressive (AR) multi-modal generation, CFG introduces design inconsistencies between language and visual content, contradicting the design philosophy of unifying different modalities for...
autoregressive, generative models, image generation, multimodal, alignment, RLHF, classifier-free guidance
null
9,473
2410.09347
[ -0.021861804649233818, -0.09094460308551788, -0.022580713033676147, 0.05514228716492653, 0.0676737055182457, 0.05482087284326553, -0.046284954994916916, -0.02251693420112133, 0.05920564755797386, -0.051389921456575394, -0.024470727890729904, -0.12831595540046692, 0.011652876622974873, -0.0...
https://github.com/FoundationVision/VAR
7,647
2
0
0
TimeMixer++: A General Time Series Pattern Machine for Universal Predictive Analysis
https://openreview.net/forum?id=1CLzLXSFNn
[ "Shiyu Wang", "Jiawei LI", "Xiaoming Shi", "Zhou Ye", "Baichuan Mo", "Wenze Lin", "Ju Shengtong", "Zhixuan Chu", "Ming Jin" ]
Oral
Time series analysis plays a critical role in numerous applications, supporting tasks such as forecasting, classification, anomaly detection, and imputation. In this work, we present the time series pattern machine (TSPM), a model designed to excel in a broad range of time series tasks through powerful representation a...
time series, pattern machine, predictive analysis
TimeMixer++ is a time series pattern machine that employs multi-scale and multi-resolution pattern extraction to deliver SOTA performance across 8 diverse analytical tasks, including forecasting, classification, anomaly detection, and imputation.
9,409
2410.16032
[ -0.10187283903360367, -0.042228348553180695, 0.05829603970050812, 0.009633166715502739, 0.07753883302211761, -0.013623742386698723, -0.06462331861257553, -0.03763797506690025, 0.01911591924726963, -0.0447213351726532, -0.08432480692863464, -0.06518087536096573, -0.04820000380277634, 0.1262...
https://github.com/kwuking/TimeMixer
1,492
0
0
0
RMP-SAM: Towards Real-Time Multi-Purpose Segment Anything
https://openreview.net/forum?id=1pXzC30ry5
[ "Shilin Xu", "Haobo Yuan", "Qingyu Shi", "Lu Qi", "Jingbo Wang", "Yibo Yang", "Yining Li", "Kai Chen", "Yunhai Tong", "Bernard Ghanem", "Xiangtai Li", "Ming-Hsuan Yang" ]
Oral
Recent segmentation methods, which adopt large-scale data training and transformer architecture, aim to create one foundation model that can perform multiple tasks. However, most of these methods rely on heavy encoder and decoder frameworks, hindering their performance in real-time scenarios. To explore real-ti...
segment anything, real-time segmentation, multi-purpose model
null
9,393
null
[ -0.022111965343356133, -0.10501755774021149, -0.006779711227864027, -0.007338926196098328, 0.08363655209541321, -0.013354041613638401, -0.010381674394011497, -0.024627620354294777, 0.029229577630758286, -0.10279345512390137, -0.0559607669711113, -0.08294443041086197, -0.08342385292053223, ...
0
0
0
0
Steering Protein Family Design through Profile Bayesian Flow
https://openreview.net/forum?id=PSiijdQjNU
[ "Jingjing Gong", "Yu Pei", "Siyu Long", "Yuxuan Song", "Zhe Zhang", "Wenhao Huang", "Ziyao Cao", "Shuyi Zhang", "Hao Zhou", "Wei-Ying Ma" ]
Oral
Protein family design emerges as a promising alternative by combining the advantages of de novo protein design and mutation-based directed evolution. In this paper, we propose ProfileBFN, the Profile Bayesian Flow Networks, specifically for generative modeling of protein families. ProfileBFN extends the discrete Bayesia...
protein family generation, homologous protein generation, protein design, bayesian flow
null
9,363
2502.07671
[ -0.11299239844083786, -0.11531081795692444, 0.016520647332072258, -0.05579738691449165, -0.0023837124463170767, -0.03263165429234505, -0.0415034145116806, 0.04254526272416115, 0.031545888632535934, -0.08722391724586487, -0.034833405166864395, -0.0504424124956131, 0.00047160746180452406, -0...
0
0
0
0
GeSubNet: Gene Interaction Inference for Disease Subtype Network Generation
https://openreview.net/forum?id=ja4rpheN2n
[ "Ziwei Yang", "Zheng Chen", "Xin Liu", "Rikuto Kotoge", "Peng Chen", "Yasuko Matsubara", "Yasushi Sakurai", "Jimeng Sun" ]
Oral
Retrieving gene functional networks from knowledge databases presents a challenge due to the mismatch between disease networks and subtype-specific variations. Current solutions, including statistical and deep learning methods, often fail to effectively integrate gene interaction knowledge from databases or explicitly ...
Gene Functional Networks, Disease Subtypes, Bioinformatics
Gene interaction inference for disease subtype network generation
9,310
2410.13178
[ -0.06950034201145172, -0.05854470282793045, 0.0606440044939518, 0.020950481295585632, 0.028141025453805923, -0.038332436233758926, -0.04598085954785347, 0.028560813516378403, -0.05605585500597954, -0.06988684833049774, -0.0803946852684021, -0.034090083092451096, -0.014246705919504166, 0.06...
0
0
0
0
Exploring The Loss Landscape Of Regularized Neural Networks Via Convex Duality
https://openreview.net/forum?id=4xWQS2z77v
[ "Sungyoon Kim", "Aaron Mishkin", "Mert Pilanci" ]
Oral
We discuss several aspects of the loss landscape of regularized neural networks: the structure of stationary points, the connectivity of optimal solutions, paths with non-increasing loss to an arbitrary global optimum, and the nonuniqueness of optimal solutions, by casting the problem into an equivalent convex problem and cons...
Convex duality, Machine Learning Theory, Loss Landscape, Optimal Sets
We investigate the loss landscape and topology of the optimal set of neural networks using convex duality.
9,266
2411.07729
[ -0.04503190517425537, -0.06553701311349869, 0.03293576091527939, -0.0012040638830512762, 0.01179457362741232, 0.11062575876712799, 0.0007244473090395331, -0.019779810681939125, 0.04524692893028259, -0.05266604572534561, -0.04235285893082619, 0.04576261341571808, 0.014728076756000519, 0.090...
0
0
0
0
Global Convergence in Neural ODEs: Impact of Activation Functions
https://openreview.net/forum?id=AoraWUmpLU
[ "Tianxiang Gao", "Siyuan Sun", "Hailiang Liu", "Hongyang Gao" ]
Oral
Neural Ordinary Differential Equations (ODEs) have been successful in various applications due to their continuous nature and parameter-sharing efficiency. However, these unique characteristics also introduce challenges in training, particularly with respect to gradient computation accuracy and convergence analysis. In...
Neural ODEs, Gradient Descent, Neural Tangent Kernel (NTK)
null
9,256
null
[ -0.12873046100139618, -0.15814968943595886, 0.054516226053237915, 0.047836191952228546, -0.036124732345342636, -0.005695126950740814, -0.02055223286151886, -0.04179324582219124, -0.007611067034304142, -0.05435239523649216, 0.050825443118810654, -0.01732397824525833, -0.05264538526535034, 0...
0
0
0
0
MoDeGPT: Modular Decomposition for Large Language Model Compression
https://openreview.net/forum?id=8EfxjTCg2k
[ "Chi-Heng Lin", "Shangqian Gao", "James Seale Smith", "Abhishek Patel", "Shikhar Tuli", "Yilin Shen", "Hongxia Jin", "Yen-Chang Hsu" ]
Oral
Large Language Models (LLMs) have significantly advanced AI with their exceptional performance across a wide range of tasks. However, their extensive computational requirements restrict their use on devices with limited resources. While recent compression methods based on low-rank matrices show potential solutions, the...
LLM, model compression, matrix decomposition
A framework that expands matrix decomposition for LLM compression beyond SVD
9,191
2408.09632
[ -0.10228797048330307, -0.008810630068182945, 0.004921607207506895, 0.007988265715539455, 0.07038586586713791, -0.02475946769118309, -0.08877620846033096, 0.013057824224233627, 0.058970946818590164, -0.02579604834318161, -0.03276863694190979, 0.018405916169285774, 0.01377324853092432, 0.055...
0
0
0
0
MIND over Body: Adaptive Thinking using Dynamic Computation
https://openreview.net/forum?id=EjJGND0m1x
[ "Mrinal Mathur", "Barak A. Pearlmutter", "Sergey M. Plis" ]
Oral
While the human brain efficiently handles various computations with a limited number of neurons, traditional deep learning networks require a significant increase in parameters to improve performance. Yet, these parameters are used inefficiently as the networks employ the same amount of computation for inputs of the ...
Interpretability, Fixed points, Dynamic routing, Dynamic input processing, Deep Learning Framework
We introduce the MIND model, which dynamically adjusts computation based on input complexity using an Introspection Network. It outperforms traditional architectures by emulating the brain’s resource allocation for improved efficiency and performance.
9,112
null
[ -0.04824502766132355, -0.048921577632427216, 0.031051410362124443, 0.024452203884720802, -0.0265795961022377, -0.09933363646268845, 0.011687866412103176, 0.015503029339015484, 0.0404827743768692, -0.05683290958404541, -0.0868314653635025, -0.030305273830890656, -0.021196512505412102, 0.038...
0
0
0
0
From Exploration to Mastery: Enabling LLMs to Master Tools via Self-Driven Interactions
https://openreview.net/forum?id=QKBu1BOAwd
[ "Changle Qu", "Sunhao Dai", "Xiaochi Wei", "Hengyi Cai", "Shuaiqiang Wang", "Dawei Yin", "Jun Xu", "Ji-Rong Wen" ]
Oral
Tool learning enables Large Language Models (LLMs) to interact with external environments by invoking tools, serving as an effective strategy to mitigate the limitations inherent in their pre-training data. In this process, tool documentation plays a crucial role by providing usage instructions for LLMs, thereby facili...
Large Language Model, Tool Learning, Learning from Experience
This paper proposes a novel framework aimed at dynamically adjusting and optimizing tool documentation based on the interaction feedback between LLMs and external tools.
8,990
2410.08197
[ 0.022456811740994453, -0.030867308378219604, 0.04056454077363014, 0.0442352220416069, 0.0666947215795517, -0.03356443718075752, -0.0242729764431715, 0.020383981987833977, 0.03476126119494438, -0.0054651787504553795, -0.07225122302770615, -0.056102775037288666, 0.07162407785654068, -0.00090...
https://github.com/quchangle1/DRAFT
30
0
0
0
LoRA Done RITE: Robust Invariant Transformation Equilibration for LoRA Optimization
https://openreview.net/forum?id=VpWki1v2P8
[ "Jui-Nan Yen", "Si Si", "Zhao Meng", "Felix Yu", "Sai Surya Duvvuri", "Inderjit S Dhillon", "Cho-Jui Hsieh", "Sanjiv Kumar" ]
Oral
Low-rank adaptation (LoRA) is a widely used parameter-efficient finetuning method for LLMs that reduces memory requirements. However, current LoRA optimizers lack transformation invariance, meaning the updates depend on how the two LoRA factors are scaled or rotated. This deficiency leads to inefficient learning and su...
optimization, LoRA
Improve the optimization of LoRA using an adaptive matrix preconditioning method with transformation invariance
8,978
2410.20625
[ -0.04780855029821396, -0.02858181670308113, -0.016373811289668083, -0.019733529537916183, 0.003435470163822174, -0.03846394643187523, -0.05482101067900658, -0.07290344685316086, -0.0811406821012497, -0.01193143054842949, -0.02585281804203987, 0.02123415842652321, 0.013338073156774044, -0.0...
0
0
0
0
Scaling and evaluating sparse autoencoders
https://openreview.net/forum?id=tcsZt9ZNKD
[ "Leo Gao", "Tom Dupre la Tour", "Henk Tillman", "Gabriel Goh", "Rajan Troll", "Alec Radford", "Ilya Sutskever", "Jan Leike", "Jeffrey Wu" ]
Oral
Sparse autoencoders provide a promising unsupervised approach for extracting interpretable features from a language model by reconstructing activations from a sparse bottleneck layer. Since language models learn many concepts, autoencoders need to be very large to recover all relevant features. However, studying the pr...
interpretability, sparse autoencoders, superposition, scaling laws
null
8,937
2406.04093
[ -0.05083079636096954, -0.07214199006557465, -0.00789849553257227, 0.07767391949892044, 0.07971625030040741, 0.02545103058218956, -0.01286759041249752, -0.07864825427532196, -0.019829662516713142, -0.08997121453285217, -0.017344921827316284, -0.07999877631664276, -0.031182600185275078, 0.05...
https://github.com/openai/sparse_autoencoder
456
1
0
0
ReGenesis: LLMs can Grow into Reasoning Generalists via Self-Improvement
https://openreview.net/forum?id=YUYJsHOf3c
[ "XIANGYU PENG", "Congying Xia", "Xinyi Yang", "Caiming Xiong", "Chien-Sheng Wu", "Chen Xing" ]
Oral
Post-training Large Language Models (LLMs) with explicit reasoning trajectories can enhance their reasoning abilities. However, acquiring such high-quality trajectory data typically demands meticulous supervision from humans or superior models, which can be either expensive or license-constrained. In this paper, we exp...
LLM, reasoning, generalization, self-improvement
We propose ReGenesis, a method to self-synthesize reasoning paths as post-training data of LLMs by progressing from general reasoning structures to task-specific reasoning paths, to improve LLMs' generalization capability in reasoning.
8,842
2410.02108
[ 0.02465178817510605, -0.08986955136060715, 0.02793246880173683, 0.030044324696063995, 0.01480670552700758, -0.10496753454208374, 0.057006169110536575, 0.04087800160050392, -0.014966250397264957, 0.006867830641567707, -0.06290865689516068, -0.02144635282456875, 0.02401670627295971, 0.089841...
0
0
0
0
Feedback Favors the Generalization of Neural ODEs
https://openreview.net/forum?id=cmfyMV45XO
[ "Jindou Jia", "Zihan Yang", "Meng Wang", "Kexin Guo", "Jianfei Yang", "Xiang Yu", "Lei Guo" ]
Oral
The well-known generalization problem hinders the application of artificial neural networks in continuous-time prediction tasks with varying latent dynamics. In sharp contrast, biological systems can neatly adapt to evolving environments, benefiting from real-time feedback mechanisms. Inspired by the feedback philosophy...
Neural ODEs, feedback, generalization, learning dynamical systems, model predictive control
null
8,713
2410.10253
[ -0.05334552749991417, -0.1223897859454155, 0.01907256618142128, 0.026505861431360245, -0.009975501336157322, -0.007722813170403242, -0.046493593603372574, -0.008197220973670483, 0.049519941210746765, 0.04502014070749283, 0.011893236078321934, 0.01137077622115612, 0.012109868228435516, 0.05...
0
0
0
0
Do LLMs Recognize Your Preferences? Evaluating Personalized Preference Following in LLMs
https://openreview.net/forum?id=QWunLKbBGF
[ "Siyan Zhao", "Mingyi Hong", "Yang Liu", "Devamanyu Hazarika", "Kaixiang Lin" ]
Oral
Large Language Models (LLMs) are increasingly deployed as chatbots, yet their ability to personalize responses to user preferences remains limited. We introduce PrefEval, a benchmark for evaluating LLMs' ability to infer, memorize, and adhere to user preferences in long-context conversational settings. PrefEval comprises...
personalization, benchmark, Large language models, conversational llm, chatbots
null
8,673
2502.09597
[ -0.044275976717472076, -0.06715352088212967, 0.03242433816194534, 0.07164333015680313, 0.029216822236776352, -0.012293260544538498, 0.049891844391822815, 0.04555393010377884, 0.020156241953372955, -0.031800221651792526, -0.09584613144397736, -0.05065290257334709, 0.023851193487644196, 0.01...
https://github.com/amazon-science/PrefEval
16
0
3
0
STAR: Synthesis of Tailored Architectures
https://openreview.net/forum?id=HsHxSN23rM
[ "Armin W Thomas", "Rom Parnichkun", "Alexander Amini", "Stefano Massaroli", "Michael Poli" ]
Oral
Iterative improvement of model architectures is fundamental to deep learning: Transformers first enabled scaling, and recent advances in model hybridization have pushed the quality-efficiency frontier. However, optimizing architectures remains challenging and expensive, with a variety of automated or manual approaches ...
alternative architectures, deep signal processing, language models
We propose a new approach for automatic model architecture optimization (STAR), which combines a novel search space based on the theory of linear input-varying systems with a hierarchical numerical encoding into architecture genomes.
8,387
2411.17800
[ -0.09677116572856903, -0.05226508900523186, -0.011218962259590626, 0.012650124728679657, -0.024326203390955925, 0.017158931121230125, -0.1306464970111847, -0.04475099593400955, -0.04638252034783363, -0.1333903670310974, -0.060982756316661835, -0.10339856892824173, -0.04531025514006615, 0.0...
0
0
0
0
Do I Know This Entity? Knowledge Awareness and Hallucinations in Language Models
https://openreview.net/forum?id=WCRQFlji2q
[ "Javier Ferrando", "Oscar Balcells Obeso", "Senthooran Rajamanoharan", "Neel Nanda" ]
Oral
Hallucinations in large language models are a widespread problem, yet the mechanisms behind whether models will hallucinate are poorly understood, limiting our ability to solve this problem. Using sparse autoencoders as an interpretability tool, we discover that a key part of these mechanisms is entity recognition, whe...
Mechanistic Interpretability, Hallucinations, Language Models
We use sparse autoencoders to identify directions that encode entity recognition in language models.
8,375
2411.14257
[ -0.006374887656420469, -0.04152751341462135, 0.006378770340234041, 0.031505241990089417, 0.023793766275048256, 0.003783887019380927, 0.0711178332567215, -0.018513288348913193, 0.12255147099494934, -0.007557756267488003, -0.08725937455892563, -0.05691023916006088, -0.006274842657148838, 0.0...
0
0
0
1
Geometry of Neural Reinforcement Learning in Continuous State and Action Spaces
https://openreview.net/forum?id=AP0ndQloqR
[ "Saket Tiwari", "Omer Gottesman", "George Konidaris" ]
Oral
Advances in reinforcement learning (RL) have led to its successful application in complex tasks with continuous state and action spaces. Despite these advances in practice, most theoretical work pertains to finite state and action spaces. We propose building a theoretical understanding of continuous state and action sp...
reinforcement learning, deep learning, geometry
We demonstrate, theoretically and empirically, that an RL agent with a neural network policy induces a low-dimensional structure on the states sampled along its trajectories in deterministic environments
8,290
null
[ -0.05738972872495651, -0.05974778160452843, 0.006626254878938198, -0.04759658873081207, -0.05342893674969673, 0.14511588215827942, 0.03095145896077156, -0.04514763504266739, 0.023864788934588432, 0.03265151381492615, 0.010795229114592075, -0.026412690058350563, 0.030016008764505386, 0.1002...
0
0
0
0
When is Task Vector Provably Effective for Model Editing? A Generalization Analysis of Nonlinear Transformers
https://openreview.net/forum?id=vRvVVb0NAz
[ "Hongkang Li", "Yihua Zhang", "Shuai Zhang", "Pin-Yu Chen", "Sijia Liu", "Meng Wang" ]
Oral
Task arithmetic refers to editing the pre-trained model by adding a weighted sum of task vectors, each of which is the weight update from the pre-trained model to fine-tuned models for certain tasks. This approach recently gained attention as a computationally efficient inference method for model editing, e.g., multi-t...
Task arithmetic, generalization, nonlinear Transformers, deep learning theory, machine unlearning
We provide the first theoretical characterization of the generalization guarantees of task vector methods on nonlinear Transformers.
8,179
null
[ -0.054212767630815506, -0.0904635414481163, -0.015423164702951908, 0.02491295337677002, 0.0499015711247921, 0.024063246324658394, 0.0032628194894641638, -0.048862505704164505, 0.035339757800102234, 0.10100848227739334, -0.03805972635746002, -0.005082751624286175, 0.007087967824190855, 0.06...
0
0
0
0
Learning and aligning single-neuron invariance manifolds in visual cortex
https://openreview.net/forum?id=kbjJ9ZOakb
[ "Mohammad Bashiri", "Luca Baroni", "Ján Antolík", "Fabian H. Sinz" ]
Oral
Understanding how sensory neurons exhibit selectivity to certain features and invariance to others is central to uncovering the computational principles underlying robustness and generalization in visual perception. Most existing methods for characterizing selectivity and invariance identify single or finite discrete s...
neural invariances, invariance manifold, MEI, implicit neural representations, contrastive learning, invariance alignment, clustering, visual cortex, macaque V1, primary visual cortex
Our method learns single-neuron invariances and aligns them, enabling population-level exploration of neural invariances.
8,146
null
[ -0.03296505659818649, -0.17560993134975433, 0.10250895470380783, 0.010258709080517292, 0.028560064733028412, 0.005658241454511881, 0.06422753632068634, -0.09743273258209229, 0.07730850577354431, -0.008020252920687199, 0.010310240089893341, -0.0749308243393898, -0.01183069683611393, 0.09067...
0
0
0
0
Feedback Schrödinger Bridge Matching
https://openreview.net/forum?id=k3tbMMW8rH
[ "Panagiotis Theodoropoulos", "Nikolaos Komianos", "Vincent Pacelli", "Guan-Horng Liu", "Evangelos Theodorou" ]
Oral
Recent advancements in diffusion bridges for distribution transport problems have heavily relied on matching frameworks, yet existing methods often face a trade-off between scalability and access to optimal pairings during training. Fully unsupervised methods make minimal assumptions but incur high computational costs...
Diffusion models, Schrödinger bridge, Distribution matching, Semi-Supervised Learning
We introduce Feedback Schrödinger Bridge Matching, a novel semi-supervised framework that uses pre-aligned pairs to guide the matching of non-coupled samples.
8,128
null
[ -0.0353582464158535, -0.045668892562389374, 0.03559865057468414, 0.057248663157224655, 0.02624988742172718, 0.1062358170747757, -0.02938905917108059, -0.04737425595521927, -0.05034956708550453, -0.04064399376511574, -0.07039449363946915, -0.06232233718037605, -0.028486771509051323, 0.06220...
0
0
0
0
TetSphere Splatting: Representing High-Quality Geometry with Lagrangian Volumetric Meshes
https://openreview.net/forum?id=8enWnd6Gp3
[ "Minghao Guo", "Bohan Wang", "Kaiming He", "Wojciech Matusik" ]
Oral
We introduce TetSphere Splatting, a Lagrangian geometry representation designed for high-quality 3D shape modeling. TetSphere splatting leverages an underused yet powerful geometric primitive -- volumetric tetrahedral meshes. It represents 3D shapes by deforming a collection of tetrahedral spheres, with geometric regul...
geometry representation, 3D modeling
null
8,079
2405.20283
[ 0.013274728320538998, 0.007746098563075066, 0.0493445061147213, -0.03806494176387787, 0.04327322170138359, -0.047684017568826675, -0.06460752338171005, -0.020692016929388046, 0.06440349668264389, -0.014268751256167889, -0.07711813598871231, -0.041533343493938446, -0.028226729482412338, 0.1...
0
0
0
0
Navigating the Digital World as Humans Do: Universal Visual Grounding for GUI Agents
https://openreview.net/forum?id=kxnoqaisCT
[ "Boyu Gou", "Ruohan Wang", "Boyuan Zheng", "Yanan Xie", "Cheng Chang", "Yiheng Shu", "Huan Sun", "Yu Su" ]
Oral
Multimodal large language models (MLLMs) are transforming the capabilities of graphical user interface (GUI) agents, facilitating their transition from controlled simulations to complex, real-world applications across various platforms. However, the effectiveness of these agents hinges on the robustness of their ground...
GUI Agents, Visual Grounding, Multimodal Large Language Models, GUI Grounding, Large Language Model
null
8,073
2410.05243
[ -0.016777221113443375, -0.1352066546678543, 0.020690467208623886, -0.02909230999648571, 0.07910242676734924, -0.08517547696828842, -0.021127641201019287, 0.043310120701789856, 0.03421204537153244, -0.09939120709896088, -0.08582869172096252, -0.11172070354223251, 0.0633080005645752, 0.01547...
https://github.com/OSU-NLP-Group/UGround
212
4
0
3
Progressive distillation induces an implicit curriculum
https://openreview.net/forum?id=wPMRwmytZe
[ "Abhishek Panigrahi", "Bingbin Liu", "Sadhika Malladi", "Andrej Risteski", "Surbhi Goel" ]
Oral
Knowledge distillation leverages a teacher model to improve the training of a student model. A persistent challenge is that a better teacher does not always yield a better student, to which a common mitigation is to use additional supervision from several “intermediate” teachers. One empirically validated variant of th...
knowledge distillation, feature learning, curriculum, sparse parity, PCFG, optimization, MLP, Transformer
Progressive distillation accelerates the student model's training by providing an implicit curriculum through intermediate teacher checkpoints.
8,004
2410.05464
[ -0.0122190797701478, -0.13345251977443695, 0.046602509915828705, 0.05254792049527168, 0.054699260741472244, -0.03779703751206398, 0.016389725729823112, -0.0325053371489048, -0.009199598804116249, -0.017870573326945305, 0.009098234586417675, 0.018352428451180458, 0.026031436398625374, 0.022...
0
0
0
0
Rethinking Reward Modeling in Preference-based Large Language Model Alignment
https://openreview.net/forum?id=rfdblE10qm
[ "Hao Sun", "Yunyi Shen", "Jean-Francois Ton" ]
Oral
The Bradley-Terry (BT) model is a common and successful practice in reward modeling for Large Language Model (LLM) alignment. However, it remains unclear *why* this model --- originally developed for multi-player stochastic game matching --- can be adopted to convert pairwise response comparisons to reward values and m...
Bradley-Terry Model, Reward Modeling, Large Language Models
null
7,959
null
[ -0.07900405675172806, -0.1078033298254013, 0.0070032114163041115, -0.06707178801298141, -0.01670834794640541, 0.11092918366193771, -0.0290711410343647, 0.0015516221756115556, 0.04718587547540665, -0.028819335624575615, -0.07969310879707336, -0.04763161391019821, 0.07714570313692093, 0.0589...
0
0
0
0
Copyright-Protected Language Generation via Adaptive Model Fusion
https://openreview.net/forum?id=kRoWeLTpL4
[ "Javier Abad", "Konstantin Donhauser", "Francesco Pinto", "Fanny Yang" ]
Oral
The risk of language models reproducing copyrighted material from their training data has led to the development of various protective measures. Among these, inference-time strategies that impose constraints via post-processing have shown promise in addressing the complexities of copyright regulation. However, they oft...
language models, copyright, model fusion, memorization, safety, privacy
null
7,825
2412.06619
[ -0.15822598338127136, -0.023303478956222534, -0.04558616504073143, 0.05932566523551941, 0.05849519371986389, 0.031427036970853806, -0.048188306391239166, -0.052727650851011276, 0.06357182562351227, -0.02266421727836132, -0.009461942128837109, 0.015417718328535557, 0.10964278131723404, -0.0...
https://github.com/jaabmar/cp_fuse
4
0
0
0
Ctrl-Adapter: An Efficient and Versatile Framework for Adapting Diverse Controls to Any Diffusion Model
https://openreview.net/forum?id=ny8T8OuNHe
[ "Han Lin", "Jaemin Cho", "Abhay Zala", "Mohit Bansal" ]
Oral
ControlNets are widely used for adding spatial control to text-to-image diffusion models. However, when it comes to controllable video generation, ControlNets cannot be directly integrated into new backbones due to feature space mismatches, and training ControlNets for new backbones can be a significant burden for many...
Adapter, Diffusion, ControlNet, Text-to-video Generation, Image-to-video Generation, Text-to-image Generation
null
7,733
null
[ 0.0206396896392107, -0.12277720868587494, 0.01512193027883768, 0.03451398387551308, 0.06956096738576889, 0.0477975495159626, -0.0745662972331047, -0.07359364628791809, 0.07287636399269104, -0.02378181181848049, 0.02638433501124382, -0.01261318288743496, -0.014932140707969666, 0.04239378124...
0
0
0
0
BIRD: A Trustworthy Bayesian Inference Framework for Large Language Models
https://openreview.net/forum?id=fAAaT826Vv
[ "Yu Feng", "Ben Zhou", "Weidong Lin", "Dan Roth" ]
Oral
Predictive models often need to work with incomplete information in real-world tasks. Consequently, they must provide reliable probability or confidence estimation, especially in large-scale decision-making and planning tasks. Current large language models (LLMs) are insufficient for accurate estimations, but they can ...
Large language models, Reasoning, Planning, Trustworthiness, Interpretability, Probability Estimation, Bayesian Methods
null
7,711
2404.12494
[ -0.03317867964506149, -0.08646712452173233, 0.07456286996603012, 0.02510131523013115, 0.08594582229852676, -0.003432556288316846, -0.0011651889653876424, -0.013519203290343285, 0.042186152189970016, -0.002679175464436412, -0.042359910905361176, -0.054675567895174026, 0.05314664542675018, 0...
0
0
0
0
LaMPlace: Learning to Optimize Cross-Stage Metrics in Macro Placement
https://openreview.net/forum?id=YLIsIzC74j
[ "Zijie Geng", "Jie Wang", "Ziyan Liu", "Siyuan Xu", "Zhentao Tang", "Shixiong Kai", "Mingxuan Yuan", "Jianye HAO", "Feng Wu" ]
Oral
Machine learning techniques have shown great potential in enhancing macro placement, a critical stage in modern chip design. However, existing methods primarily focus on *online* optimization of *intermediate surrogate metrics* that are available at the current placement stage, rather than directly targeting the *cross...
Macro placement, Chip design, EDA
We propose a learning-based method for optimizing cross-stage metrics in macro placement.
7,707
null
[ 0.008062463253736496, -0.028992999345064163, 0.015776434913277626, 0.022806227207183838, 0.04389747232198715, -0.05502326413989067, -0.032911062240600586, 0.01063893549144268, -0.06531410664319992, -0.02471105195581913, -0.05196776241064072, -0.051824890077114105, -0.016542933881282806, 0....
0
0
0
0
miniCTX: Neural Theorem Proving with (Long-)Contexts
https://openreview.net/forum?id=KIgaAqEFHW
[ "Jiewen Hu", "Thomas Zhu", "Sean Welleck" ]
Oral
Real-world formal theorem proving often depends on a wealth of context, including definitions, lemmas, comments, file structure, and other information. We introduce $\texttt{miniCTX}$, which tests a model's ability to prove formal mathematical theorems that depend on new context that is not seen during training. $\text...
Neural theorem proving, Formal mathematics, Benchmark dataset
We introduce a context-rich dataset and toolkit for evaluation of neural theorem proving under a more realistic scenario.
7,565
null
[ -0.13705064356327057, 0.031196799129247665, 0.024180704727768898, 0.015809308737516403, 0.06612766534090042, 0.017023177817463875, 0.0258252564817667, 0.06612937897443771, -0.01410945225507021, 0.0028039151802659035, -0.06935737282037735, -0.057328302413225174, 0.04247839003801346, 0.02571...
0
0
0
0
BigCodeBench: Benchmarking Code Generation with Diverse Function Calls and Complex Instructions
https://openreview.net/forum?id=YrycTjllL0
[ "Terry Yue Zhuo", "Vu Minh Chien", "Jenny Chim", "Han Hu", "Wenhao Yu", "Ratnadira Widyasari", "Imam Nur Bani Yusuf", "Haolan Zhan", "Junda He", "Indraneil Paul", "Simon Brunner", "Chen GONG", "James Hoang", "Armel Randy Zebaze", "Xiaoheng Hong", "Wen-Ding Li", "Jean Kaddour", "Min...
Oral
Task automation has been greatly empowered by the recent advances in Large Language Models (LLMs) via Python code, where the tasks range from software engineering development to general-purpose reasoning. While current benchmarks have shown that LLMs can solve tasks using programs like human developers, the majority of...
Code Generation, Tool Use, Instruction Following, Benchmark
null
7,553
2406.15877
[ -0.04171590134501457, -0.012095627374947071, 0.018297633156180382, 0.04362351447343826, 0.02068931981921196, -0.1356925517320633, -0.06065598875284195, 0.013648243620991707, -0.06208766624331474, -0.01638268493115902, -0.08488892018795013, -0.11906464397907257, 0.009549309499561787, 0.0022...
https://github.com/bigcode-project/bigcodebench-annotation
21
0
4
11
Towards a Complete Logical Framework for GNN Expressiveness
https://openreview.net/forum?id=pqOjj90Vwp
[ "Tuo Xu" ]
Oral
Designing expressive graph neural networks (GNNs) is an important topic in graph machine learning. Traditionally, the Weisfeiler-Lehman (WL) test has been the primary measure for evaluating GNN expressiveness. However, high-order WL tests can be obscure, making it challenging to discern the specific graph patter...
graph neural networks, logic
Analyze the logical expressiveness of arbitrary graph neural networks
7,534
null
[ -0.006193577311933041, -0.04472825676202774, 0.05913880467414856, 0.03237459808588028, 0.03994143381714821, -0.017174139618873596, -0.005707179196178913, -0.09697847813367844, -0.0522092804312706, -0.05994293466210365, -0.017254479229450226, -0.041682906448841095, -0.008189592510461807, 0....
0
0
0
0
Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think
https://openreview.net/forum?id=DJSZGGZYVi
[ "Sihyun Yu", "Sangkyung Kwak", "Huiwon Jang", "Jongheon Jeong", "Jonathan Huang", "Jinwoo Shin", "Saining Xie" ]
Oral
Recent studies have shown that the denoising process in (generative) diffusion models can induce meaningful (discriminative) representations inside the model, though the quality of these representations still lags behind those learned through recent self-supervised learning methods. We argue that one main bottleneck in...
Diffusion models, Representation learning
null
7,519
2410.06940
[ -0.09440325200557709, -0.10839608311653137, 0.01899655908346176, -0.005671923514455557, 0.009989392012357712, 0.018220817670226097, -0.07412641495466232, -0.07837681472301483, 0.05311155319213867, -0.08401204645633698, -0.01667184568941593, -0.029612068086862564, 0.002613203600049019, 0.04...
https://github.com/sihyun-yu/REPA
971
0
0
0
Classic but Everlasting: Traditional Gradient-Based Algorithms Converges Fast Even in Time-Varying Multi-Player Games
https://openreview.net/forum?id=t8FG4cJuL3
[ "Yanzheng Chen", "Jun Yu" ]
Oral
Last-iterate convergence behaviours of well-known algorithms have been intensively investigated in various games, such as two-player bilinear zero-sum games. However, most known last-iterate convergence properties rely on strict settings where the underlying games must have time-invariant payoffs. Moreover, the limited known ...
time-varying games, Nash equilibrium, extra gradient algorithm, optimistic gradient algorithm
null
7,489
null
[ -0.06554250419139862, -0.030323559418320656, -0.00010909232514677569, -0.055214397609233856, 0.04965351149439812, -0.0023562072310596704, 0.03694998472929001, -0.013448821380734444, 0.07848136872053146, 0.005190597847104073, -0.06230312958359718, 0.06744252890348434, -0.039870522916316986, ...
0
0
0
0
DSPO: Direct Score Preference Optimization for Diffusion Model Alignment
https://openreview.net/forum?id=xyfb9HHvMe
[ "Huaisheng Zhu", "Teng Xiao", "Vasant G Honavar" ]
Oral
Diffusion-based Text-to-Image (T2I) models have achieved impressive success in generating high-quality images from textual prompts. While large language models (LLMs) effectively leverage Direct Preference Optimization (DPO) for fine-tuning on human preference data without the need for reward models, diffusion models h...
Text-to-image generation
null
7,413
null
[ -0.028013555333018303, -0.07777834683656693, 0.022811461240053177, 0.024067675694823265, 0.0588504783809185, 0.012977741658687592, -0.03201677277684212, 0.049231063574552536, 0.0785655751824379, -0.021499445661902428, -0.011920848861336708, -0.04804694280028343, 0.061235323548316956, 0.079...
0
0
0
0
TANGO: Co-Speech Gesture Video Reenactment with Hierarchical Audio Motion Embedding and Diffusion Interpolation
https://openreview.net/forum?id=LbEWwJOufy
[ "Haiyang Liu", "Xingchao Yang", "Tomoya Akiyama", "Yuantian Huang", "Qiaoge Li", "Shigeru Kuriyama", "Takafumi Taketomi" ]
Oral
We present TANGO, a framework for generating co-speech body-gesture videos. Given a few-minute, single-speaker reference video and target speech audio, TANGO produces high-fidelity videos with synchronized body gestures. TANGO builds on Gesture Video Reenactment (GVR), which splits and retrieves video clips using a dir...
co-speech video generation, cross-modal retrieval, audio representation learning, motion representation learning, video frame interpolation
null
7,366
2410.04221
[ -0.06344190984964371, -0.1583826094865799, 0.024966789409518242, -0.08356618136167526, 0.08915155380964279, 0.00015364738646894693, -0.06634741276502609, -0.12013374269008636, 0.028198091313242912, -0.1461312621831894, 0.024013174697756767, -0.06727731972932816, -0.08770925551652908, 0.057...
0
0
0
7
Predictive Inverse Dynamics Models are Scalable Learners for Robotic Manipulation
https://openreview.net/forum?id=meRCKuUpmc
[ "Yang Tian", "Sizhe Yang", "Jia Zeng", "Ping Wang", "Dahua Lin", "Hao Dong", "Jiangmiao Pang" ]
Oral
Current efforts to learn scalable policies in robotic manipulation primarily fall into two categories: one focuses on "action," which involves behavior cloning from extensive collections of robotic data, while the other emphasizes "vision," enhancing model generalization by pre-training representations or generative mo...
Robotic Manipulation, Pre-training, Visual Foresight, Inverse Dynamics, Large-scale robot dataset
null
7,358
2412.15109
[ -0.06657740473747253, -0.13826189935207367, 0.0555603951215744, 0.028268953785300255, -0.003497960977256298, -0.0018648006953299046, -0.03263436257839203, 0.019342519342899323, 0.008944697678089142, 0.04777808487415314, -0.014738580211997032, -0.042729903012514114, -0.009697823785245419, 0...
https://github.com/openrobotlab/seer
164
0
0
0
The Complexity of Two-Team Polymatrix Games with Independent Adversaries
https://openreview.net/forum?id=9VGTk2NYjF
[ "Alexandros Hollender", "Gilbert Maystre", "Sai Ganesh Nagarajan" ]
Oral
Adversarial multiplayer games are an important object of study in multiagent learning. In particular, polymatrix zero-sum games are a multiplayer setting where Nash equilibria are known to be efficiently computable. Towards understanding the limits of tractability in polymatrix games, we study the computation of Nash e...
algorithmic game theory, Nash equilibrium, minmax optimization
null
7,295
2409.07398
[ 0.024471664801239967, -0.08182499557733536, -0.12963928282260895, -0.07233412563800812, 0.023924553766846657, 0.011418716982007027, 0.017278777435421944, -0.08325273543596268, 0.030321044847369194, 0.04553767293691635, -0.029778502881526947, 0.030893484130501747, 0.006674001459032297, 0.06...
0
0
0
0
MMQA: Evaluating LLMs with Multi-Table Multi-Hop Complex Questions
https://openreview.net/forum?id=GGlpykXDCa
[ "Jian Wu", "Linyi Yang", "Dongyuan Li", "Yuliang Ji", "Manabu Okumura", "Yue Zhang" ]
Oral
While large language models (LLMs) have made strides in understanding tabular data, current tabular evaluation benchmarks, such as WikiTableQuestions and WikiSQL, focus on single-table scenarios, which do not necessarily reflect the complexity of real-world applications. To bridge this gap, we present a \textbf{M}u...
LLM evaluation, multi-table question answering, multi-hop question answering
A novel multi-table benchmark that evaluates LLMs' multi-table understanding and reasoning ability.
7,281
null
[ -0.02934744767844677, -0.08121498674154282, -0.03512432426214218, 0.029858652502298355, 0.04434993490576744, 0.037718676030635834, -0.008812956511974335, 0.01598503068089485, -0.02262089028954506, 0.002641688333824277, -0.03021354228258133, -0.0970788300037384, 0.0432286262512207, 0.035448...
0
0
0
0
On Scaling Up 3D Gaussian Splatting Training
https://openreview.net/forum?id=pQqeQpMkE7
[ "Hexu Zhao", "Haoyang Weng", "Daohan Lu", "Ang Li", "Jinyang Li", "Aurojit Panda", "Saining Xie" ]
Oral
3D Gaussian Splatting (3DGS) is increasingly popular for 3D reconstruction due to its superior visual quality and rendering speed. However, 3DGS training currently occurs on a single GPU, limiting its ability to handle high-resolution and large-scale 3D reconstruction tasks due to memory constraints. We introduce Grend...
Gaussian Splatting, Machine Learning System, Distributed Training
We describe a scalable distributed training system for 3D Gaussian Splatting.
7,162
2406.18533
[ -0.06433776021003723, -0.0989738330245018, 0.0071748592890799046, -0.021891985088586807, 0.01927432231605053, -0.08432116359472275, -0.019537461921572685, -0.030335677787661552, -0.03401191160082817, -0.05606852471828461, -0.04495905712246895, -0.02250686101615429, 0.0038969675078988075, 0...
https://github.com/nyu-systems/grendel-gs
516
0
0
0
Emergence of meta-stable clustering in mean-field transformer models
https://openreview.net/forum?id=eBS3dQQ8GV
[ "Giuseppe Bruno", "Federico Pasqualotto", "Andrea Agazzi" ]
Oral
We model the evolution of tokens within a deep stack of Transformer layers as a continuous-time flow on the unit sphere, governed by a mean-field interacting particle system, building on the framework introduced in Geshkovski et al. (2023). Studying the corresponding mean-field Partial Differential Equation (PDE), whic...
Mean-field limits, Transformers, Meta-stability, Clustering
null
7,124
2410.23228
[ -0.07450608909130096, -0.04723397642374039, 0.07168866693973541, 0.04749920964241028, 0.05832790210843086, 0.027927406132221222, -0.08770138025283813, 0.08643690496683121, 0.11285391449928284, -0.03449639305472374, -0.017498396337032318, 0.021830150857567787, -0.0064332736656069756, 0.0119...
0
0
0
0
Wide Neural Networks Trained with Weight Decay Provably Exhibit Neural Collapse
https://openreview.net/forum?id=1HCN4pjTb4
[ "Arthur Jacot", "Peter Súkeník", "Zihan Wang", "Marco Mondelli" ]
Oral
Deep neural networks (DNNs) at convergence consistently represent the training data in the last layer via a geometric structure referred to as neural collapse. This empirical evidence has spurred a line of theoretical research aimed at proving the emergence of neural collapse, mostly focusing on the unconstrained featu...
neural collapse, gradient descent training, weight decay, balancedness
We consider deep neural networks with at least two final linear layers, and we show that neural collapse provably holds in the end-to-end training of the model with weight decay
7,009
2410.04887
[ -0.11839079856872559, -0.055252622812986374, 0.09691417962312698, 0.06701134890317917, 0.08028646558523178, 0.04934646189212799, -0.05470225587487221, -0.019647721201181412, 0.013599413447082043, -0.04442286118865013, -0.013481452129781246, 0.04812111333012581, 0.021745944395661354, -0.015...
0
0
0
0
ECD: A Machine Learning Benchmark for Predicting Enhanced-Precision Electronic Charge Density in Crystalline Inorganic Materials
https://openreview.net/forum?id=SBCMNc3Mq3
[ "Pin Chen", "Zexin Xu", "Qing Mo", "Hongjin Zhong", "Fengyang Xu", "Yutong Lu" ]
Oral
Supervised machine learning techniques are increasingly being adopted to speed up electronic structure predictions, serving as alternatives to first-principles methods like Density Functional Theory (DFT). Although current DFT datasets mainly emphasize chemical properties and atomic forces, the precise prediction of el...
Electronic Charge Density, Crystalline Inorganic Materials, Graph Neural Network, Dataset
A Machine Learning Benchmark for Predicting Enhanced-Precision Electronic Charge Density in Crystalline Inorganic Materials
6,903
null
[ -0.045085225254297256, -0.039107076823711395, 0.014659601263701916, 0.06280644237995148, 0.0669233649969101, -0.041637811809778214, 0.01918724924325943, 0.005758386105298996, -0.02410193718969822, -0.016486041247844696, -0.056795384734869, -0.054688602685928345, -0.007578206714242697, 0.05...
0
0
0
0
On the Benefits of Memory for Modeling Time-Dependent PDEs
https://openreview.net/forum?id=o9kqa5K3tB
[ "Ricardo Buitrago", "Tanya Marwah", "Albert Gu", "Andrej Risteski" ]
Oral
Data-driven techniques have emerged as a promising alternative to traditional numerical methods for solving PDEs. For time-dependent PDEs, many approaches are Markovian---the evolution of the trained system only depends on the current state, and not the past states. In this work, we investigate the benefits of using m...
State Space Models, Partial Differential Equations
null
6,848
2409.02313
[ -0.08543740212917328, 0.0003199729253537953, 0.04736948013305664, 0.10126020759344101, 0.010474911890923977, 0.0265519842505455, -0.06614495068788528, 0.008845538832247257, 0.021311352029442787, -0.002925272798165679, 0.007472162134945393, 0.029370317235589027, -0.009509755298495293, 0.037...
0
0
0
0