arxiv_id | title | authors | categories | summary | published | comments | journal_ref | doi | ss_title | ss_authors | ss_year | ss_venue | ss_citationCount | ss_referenceCount | ss_fieldsOfStudy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2410.01679 | VinePPO: Refining Credit Assignment in RL Training of LLMs | ['Amirhossein Kazemnejad', 'Milad Aghajohari', 'Eva Portelance', 'Alessandro Sordoni', 'Siva Reddy', 'Aaron Courville', 'Nicolas Le Roux'] | ['cs.LG', 'cs.CL'] | Large language models (LLMs) are increasingly applied to complex reasoning tasks that require executing several complex steps before receiving any reward. Properly assigning credit to these steps is essential for enhancing model performance. Proximal Policy Optimization (PPO), a common reinforcement learning (RL) algor... | 2024-10-02T15:49:30Z | Accepted at ICML 2025; 12 pages and 22 pages Appendix | null | null | null | null | null | null | null | null | null |
2410.01680 | PHI-S: Distribution Balancing for Label-Free Multi-Teacher Distillation | ['Mike Ranzinger', 'Jon Barker', 'Greg Heinrich', 'Pavlo Molchanov', 'Bryan Catanzaro', 'Andrew Tao'] | ['cs.LG', 'cs.AI', 'cs.CV'] | Various visual foundation models have distinct strengths and weaknesses, both of which can be improved through heterogeneous multi-teacher knowledge distillation without labels, termed "agglomerative models." We build upon this body of work by studying the effect of the teachers' activation statistics, particularly the... | 2024-10-02T15:50:35Z | null | null | null | PHI-S: Distribution Balancing for Label-Free Multi-Teacher Distillation | ['Michael Ranzinger', 'Jon Barker', 'Greg Heinrich', 'Pavlo Molchanov', 'Bryan Catanzaro', 'Andrew Tao'] | 2024 | arXiv.org | 5 | 51 | ['Computer Science'] |
2410.01691 | FactAlign: Long-form Factuality Alignment of Large Language Models | ['Chao-Wei Huang', 'Yun-Nung Chen'] | ['cs.CL', 'cs.AI'] | Large language models have demonstrated significant potential as the next-generation information access engines. However, their reliability is hindered by issues of hallucination and generating non-factual content. This is particularly problematic in long-form responses, where assessing and ensuring factual accuracy is... | 2024-10-02T16:03:13Z | Accepted to EMNLP 2024 Findings | null | null | null | null | null | null | null | null | null |
2410.01744 | Leopard: A Vision Language Model For Text-Rich Multi-Image Tasks | ['Mengzhao Jia', 'Wenhao Yu', 'Kaixin Ma', 'Tianqing Fang', 'Zhihan Zhang', 'Siru Ouyang', 'Hongming Zhang', 'Dong Yu', 'Meng Jiang'] | ['cs.CV', 'cs.CL'] | Text-rich images, where text serves as the central visual element guiding the overall understanding, are prevalent in real-world applications, such as presentation slides, scanned documents, and webpage snapshots. Tasks involving multiple text-rich images are especially challenging, as they require not only understandi... | 2024-10-02T16:55:01Z | Our code is available at https://github.com/tencent-ailab/Leopard | null | null | Leopard: A Vision Language Model For Text-Rich Multi-Image Tasks | ['Mengzhao Jia', 'Wenhao Yu', 'Kaixin Ma', 'Tianqing Fang', 'Zhihan Zhang', 'Siru Ouyang', 'Hongming Zhang', 'Meng Jiang', 'Dong Yu'] | 2024 | Trans. Mach. Learn. Res. | 7 | 79 | ['Computer Science'] |
2410.01912 | A Spark of Vision-Language Intelligence: 2-Dimensional Autoregressive Transformer for Efficient Finegrained Image Generation | ['Liang Chen', 'Sinan Tan', 'Zefan Cai', 'Weichu Xie', 'Haozhe Zhao', 'Yichi Zhang', 'Junyang Lin', 'Jinze Bai', 'Tianyu Liu', 'Baobao Chang'] | ['cs.CV', 'cs.AI', 'cs.CL'] | This work tackles the information loss bottleneck of vector-quantization (VQ) autoregressive image generation by introducing a novel model architecture called the 2-Dimensional Autoregression (DnD) Transformer. The DnD-Transformer predicts more codes for an image by introducing a new autoregression direction, \textit{m... | 2024-10-02T18:10:05Z | 25 pages, 20 figures, code is open at https://github.com/chenllliang/DnD-Transformer | null | null | A Spark of Vision-Language Intelligence: 2-Dimensional Autoregressive Transformer for Efficient Finegrained Image Generation | ['Liang Chen', 'Sinan Tan', 'Zefan Cai', 'Weichu Xie', 'Haozhe Zhao', 'Yichi Zhang', 'Junyang Lin', 'Jinze Bai', 'Tianyu Liu', 'Baobao Chang'] | 2024 | International Conference on Learning Representations | 4 | 47 | ['Computer Science'] |
2410.02073 | Depth Pro: Sharp Monocular Metric Depth in Less Than a Second | ['Aleksei Bochkovskii', 'Amaël Delaunoy', 'Hugo Germain', 'Marcel Santos', 'Yichao Zhou', 'Stephan R. Richter', 'Vladlen Koltun'] | ['cs.CV', 'cs.LG'] | We present a foundation model for zero-shot metric monocular depth estimation. Our model, Depth Pro, synthesizes high-resolution depth maps with unparalleled sharpness and high-frequency details. The predictions are metric, with absolute scale, without relying on the availability of metadata such as camera intrinsics. ... | 2024-10-02T22:42:20Z | Published at ICLR 2025. Code and weights available at https://github.com/apple/ml-depth-pro | null | null | null | null | null | null | null | null | null |
2410.02082 | FARM: Functional Group-Aware Representations for Small Molecules | ['Thao Nguyen', 'Kuan-Hao Huang', 'Ge Liu', 'Martin D. Burke', 'Ying Diao', 'Heng Ji'] | ['cs.LG', 'q-bio.QM'] | We introduce Functional Group-Aware Representations for Small Molecules (FARM), a novel foundation model designed to bridge the gap between SMILES, natural language, and molecular graphs. The key innovation of FARM lies in its functional group-aware tokenization, which directly incorporates functional group information... | 2024-10-02T23:04:58Z | Preprint | null | null | FARM: Functional Group-Aware Representations for Small Molecules | ['Thao Nguyen', 'Kuan-Hao Huang', 'Ge Liu', 'Martin Burke', 'Ying Diao', 'Heng Ji'] | 2024 | arXiv.org | 1 | 42 | ['Computer Science', 'Biology'] |
2410.02089 | RLEF: Grounding Code LLMs in Execution Feedback with Reinforcement Learning | ['Jonas Gehring', 'Kunhao Zheng', 'Jade Copet', 'Vegard Mella', 'Quentin Carbonneaux', 'Taco Cohen', 'Gabriel Synnaeve'] | ['cs.CL', 'cs.AI'] | Large language models (LLMs) deployed as agents solve user-specified tasks over multiple steps while keeping the required manual engagement to a minimum. Crucially, such LLMs need to ground their generations in any feedback obtained to reliably achieve the desired outcomes. We propose an end-to-end reinforcement learni... | 2024-10-02T23:25:17Z | Add repair model ablation, update related work | null | null | RLEF: Grounding Code LLMs in Execution Feedback with Reinforcement Learning | ['Jonas Gehring', 'Kunhao Zheng', 'Jade Copet', 'Vegard Mella', 'Taco Cohen', 'Gabriele Synnaeve'] | 2024 | arXiv.org | 36 | 48 | ['Computer Science'] |
2410.02131 | Boosting Masked ECG-Text Auto-Encoders as Discriminative Learners | ['Hung Manh Pham', 'Aaqib Saeed', 'Dong Ma'] | ['cs.LG', 'cs.CL'] | The accurate interpretation of Electrocardiogram (ECG) signals is pivotal for diagnosing cardiovascular diseases. Integrating ECG signals with accompanying textual reports further holds immense potential to enhance clinical diagnostics by combining physiological data and qualitative insights. However, this integration ... | 2024-10-03T01:24:09Z | Accepted at ICML 2025 | null | null | null | null | null | null | null | null | null |
2410.02197 | Beyond Bradley-Terry Models: A General Preference Model for Language Model Alignment | ['Yifan Zhang', 'Ge Zhang', 'Yue Wu', 'Kangping Xu', 'Quanquan Gu'] | ['cs.AI', 'cs.CL', 'cs.LG'] | Modeling human preferences is crucial for aligning foundation models with human values. Traditional reward modeling methods, such as the Bradley-Terry (BT) reward model, fall short in expressiveness, particularly in addressing intransitive preferences. In this paper, we introduce preference embedding, an approach that ... | 2024-10-03T04:22:55Z | Accepted to the 42nd International Conference on Machine Learning (ICML 2025) | null | null | Beyond Bradley-Terry Models: A General Preference Model for Language Model Alignment | ['Yifan Zhang', 'Ge Zhang', 'Yue Wu', 'Kangping Xu', 'Quanquan Gu'] | 2024 | null | 2 | 62 | ['Computer Science'] |
2410.02249 | Spiking Neural Network as Adaptive Event Stream Slicer | ['Jiahang Cao', 'Mingyuan Sun', 'Ziqing Wang', 'Hao Cheng', 'Qiang Zhang', 'Shibo Zhou', 'Renjing Xu'] | ['cs.CV', 'cs.NE'] | Event-based cameras are attracting significant interest as they provide rich edge information, high dynamic range, and high temporal resolution. Many state-of-the-art event-based algorithms rely on splitting the events into fixed groups, resulting in the omission of crucial temporal information, particularly when deali... | 2024-10-03T06:41:10Z | Accepted to NeurIPS 2024 | null | null | Spiking Neural Network as Adaptive Event Stream Slicer | ['Jiahang Cao', 'Mingyuan Sun', 'Ziqing Wang', 'Haotai Cheng', 'Qiang Zhang', 'Shibo Zhou', 'Renjing Xu'] | 2024 | Neural Information Processing Systems | 2 | 50 | ['Computer Science'] |
2410.02250 | Probabilistic road classification in historical maps using synthetic data and deep learning | ['Dominik J. Mühlematter', 'Sebastian Schweizer', 'Chenjing Jiao', 'Xue Xia', 'Magnus Heitzler', 'Lorenz Hurni'] | ['cs.CV', 'cs.LG'] | Historical maps are invaluable for analyzing long-term changes in transportation and spatial development, offering a rich source of data for evolutionary studies. However, digitizing and classifying road networks from these maps is often expensive and time-consuming, limiting their widespread use. Recent advancements i... | 2024-10-03T06:43:09Z | null | null | null | Probabilistic road classification in historical maps using synthetic data and deep learning | ['Dominik J. Mühlematter', 'Sebastian Schweizer', 'C. Jiao', 'Xue Xia', 'M. Heitzler', 'L. Hurni'] | 2024 | arXiv.org | 0 | 71 | ['Computer Science'] |
2410.02367 | SageAttention: Accurate 8-Bit Attention for Plug-and-play Inference Acceleration | ['Jintao Zhang', 'Jia Wei', 'Haofeng Huang', 'Pengle Zhang', 'Jun Zhu', 'Jianfei Chen'] | ['cs.LG'] | The transformer architecture predominates across various models. As the heart of the transformer, attention has a computational complexity of $O(N^2)$, compared to $O(N)$ for linear transformations. When handling large sequence lengths, attention becomes the primary time-consuming component. Although quantization has p... | 2024-10-03T10:25:23Z | @inproceedings{zhang2025sageattention, title={SageAttention: Accurate 8-Bit Attention for Plug-and-play Inference Acceleration}, author={Zhang, Jintao and Wei, Jia and Zhang, Pengle and Zhu, Jun and Chen, Jianfei}, booktitle={International Conference on Learning Representations (ICLR)}, year={2025} } | The Thirteenth International Conference on Learning Representations (ICLR 2025) | null | SageAttention: Accurate 8-Bit Attention for Plug-and-play Inference Acceleration | ['Jintao Zhang', 'Jia Wei', 'Pengle Zhang', 'Jun Zhu', 'Jianfei Chen'] | 2024 | International Conference on Learning Representations | 39 | 79 | ['Computer Science'] |
2410.02381 | MetaMetrics: Calibrating Metrics For Generation Tasks Using Human Preferences | ['Genta Indra Winata', 'David Anugraha', 'Lucky Susanto', 'Garry Kuwanto', 'Derry Tanti Wijaya'] | ['cs.CL', 'cs.AI', 'cs.CV', 'cs.LG'] | Understanding the quality of a performance evaluation metric is crucial for ensuring that model outputs align with human preferences. However, it remains unclear how well each metric captures the diverse aspects of these preferences, as metrics often excel in one particular area but not across all dimensions. To addres... | 2024-10-03T11:01:25Z | Accepted to ICLR 2025 | null | null | MetaMetrics: Calibrating Metrics For Generation Tasks Using Human Preferences | ['Genta Indra Winata', 'David Anugraha', 'Lucky Susanto', 'Garry Kuwanto', 'Derry Tanti Wijaya'] | 2024 | International Conference on Learning Representations | 11 | 85 | ['Computer Science'] |
2410.02416 | Eliminating Oversaturation and Artifacts of High Guidance Scales in Diffusion Models | ['Seyedmorteza Sadat', 'Otmar Hilliges', 'Romann M. Weber'] | ['cs.LG', 'cs.CV'] | Classifier-free guidance (CFG) is crucial for improving both generation quality and alignment between the input condition and final output in diffusion models. While a high guidance scale is generally required to enhance these aspects, it also causes oversaturation and unrealistic artifacts. In this paper, we revisit t... | 2024-10-03T12:06:29Z | Published as a conference paper at ICLR 2025 | The Thirteenth International Conference on Learning Representations (ICLR 2025) | null | null | null | null | null | null | null | null |
2410.02440 | Optimizing Adaptive Attacks against Watermarks for Language Models | ['Abdulrahman Diaa', 'Toluwani Aremu', 'Nils Lukas'] | ['cs.CR', 'cs.AI'] | Large Language Models (LLMs) can be misused to spread unwanted content at scale. Content watermarking deters misuse by hiding messages in content, enabling its detection using a secret watermarking key. Robustness is a core security property, stating that evading detection requires (significant) degradation of the cont... | 2024-10-03T12:37:39Z | To appear at the International Conference on Machine Learning (ICML'25) | null | null | null | null | null | null | null | null | null |
2410.02503 | Mixed-Session Conversation with Egocentric Memory | ['Jihyoung Jang', 'Taeyoung Kim', 'Hyounghun Kim'] | ['cs.CL', 'cs.AI'] | Recently introduced dialogue systems have demonstrated high usability. However, they still fall short of reflecting real-world conversation scenarios. Current dialogue systems exhibit an inability to replicate the dynamic, continuous, long-term interactions involving multiple partners. This shortfall arises because the... | 2024-10-03T14:06:43Z | EMNLP Findings 2024 (30 pages); Project website: https://mixed-session.github.io/ | null | null | null | null | null | null | null | null | null |
2410.02525 | Contextual Document Embeddings | ['John X. Morris', 'Alexander M. Rush'] | ['cs.CL', 'cs.AI'] | Dense document embeddings are central to neural retrieval. The dominant paradigm is to train and construct embeddings by running encoders directly on individual documents. In this work, we argue that these embeddings, while effective, are implicitly out-of-context for targeted use cases of retrieval, and that a context... | 2024-10-03T14:33:34Z | null | null | null | Contextual Document Embeddings | ['John X. Morris', 'Alexander M. Rush'] | 2024 | International Conference on Learning Representations | 9 | 58 | ['Computer Science'] |
2410.02653 | Measuring and Improving Persuasiveness of Large Language Models | ['Somesh Singh', 'Yaman K Singla', 'Harini SI', 'Balaji Krishnamurthy'] | ['cs.CL', 'cs.CV'] | LLMs are increasingly being used in workflows involving generating content to be consumed by humans (e.g., marketing) and also in directly interacting with humans (e.g., through chatbots). The development of such systems that are capable of generating verifiably persuasive messages presents both opportunities and chall... | 2024-10-03T16:36:35Z | null | null | null | null | null | null | null | null | null | null |
2410.02660 | How to Train Long-Context Language Models (Effectively) | ['Tianyu Gao', 'Alexander Wettig', 'Howard Yen', 'Danqi Chen'] | ['cs.CL', 'cs.LG'] | We study continued training and supervised fine-tuning (SFT) of a language model (LM) to make effective use of long-context information. We first establish a reliable evaluation protocol to guide model development -- instead of perplexity or simple needle-in-a-haystack (NIAH) tests, we use a broad set of long-context d... | 2024-10-03T16:46:52Z | Accepted to ACL 2025. Our code, data, and models are available at https://github.com/princeton-nlp/ProLong | null | null | How to Train Long-Context Language Models (Effectively) | ['Tianyu Gao', 'Alexander Wettig', 'Howard Yen', 'Danqi Chen'] | 2024 | arXiv.org | 48 | 104 | ['Computer Science'] |
2410.02675 | FAN: Fourier Analysis Networks | ['Yihong Dong', 'Ge Li', 'Yongding Tao', 'Xue Jiang', 'Kechi Zhang', 'Jia Li', 'Jinliang Deng', 'Jing Su', 'Jun Zhang', 'Jingjing Xu'] | ['cs.LG', 'cs.AI', 'cs.CL'] | Despite the remarkable successes of general-purpose neural networks, such as MLPs and Transformers, we find that they exhibit notable shortcomings in modeling and reasoning about periodic phenomena, achieving only marginal performance within the training domain and failing to generalize effectively to out-of-domain (OO... | 2024-10-03T17:02:21Z | null | null | null | null | null | null | null | null | null | null |
2410.02678 | Distilling an End-to-End Voice Assistant Without Instruction Training Data | ['William Held', 'Ella Li', 'Michael Ryan', 'Weiyan Shi', 'Yanzhe Zhang', 'Diyi Yang'] | ['cs.CL', 'cs.AI'] | Voice assistants, such as Siri and Google Assistant, typically model audio and text separately, resulting in lost speech information and increased complexity. Recent efforts to address this with end-to-end Speech Large Language Models (LLMs) trained with supervised finetuning (SFT) have led to models "forgetting" ca... | 2024-10-03T17:04:48Z | null | null | null | null | null | null | null | null | null | null |
2410.02705 | ControlAR: Controllable Image Generation with Autoregressive Models | ['Zongming Li', 'Tianheng Cheng', 'Shoufa Chen', 'Peize Sun', 'Haocheng Shen', 'Longjin Ran', 'Xiaoxin Chen', 'Wenyu Liu', 'Xinggang Wang'] | ['cs.CV'] | Autoregressive (AR) models have reformulated image generation as next-token prediction, demonstrating remarkable potential and emerging as strong competitors to diffusion models. However, control-to-image generation, akin to ControlNet, remains largely unexplored within AR models. Although a natural approach, inspired ... | 2024-10-03T17:28:07Z | To appear in ICLR 2025 | null | null | null | null | null | null | null | null | null |
2410.02712 | LLaVA-Critic: Learning to Evaluate Multimodal Models | ['Tianyi Xiong', 'Xiyao Wang', 'Dong Guo', 'Qinghao Ye', 'Haoqi Fan', 'Quanquan Gu', 'Heng Huang', 'Chunyuan Li'] | ['cs.CV', 'cs.CL'] | We introduce LLaVA-Critic, the first open-source large multimodal model (LMM) designed as a generalist evaluator to assess performance across a wide range of multimodal tasks. LLaVA-Critic is trained using a high-quality critic instruction-following dataset that incorporates diverse evaluation criteria and scenarios. O... | 2024-10-03T17:36:33Z | Accepted by CVPR 2025; Project Page: https://llava-vl.github.io/blog/2024-10-03-llava-critic | null | null | null | null | null | null | null | null | null |
2410.02713 | Video Instruction Tuning With Synthetic Data | ['Yuanhan Zhang', 'Jinming Wu', 'Wei Li', 'Bo Li', 'Zejun Ma', 'Ziwei Liu', 'Chunyuan Li'] | ['cs.CV', 'cs.CL'] | The development of video large multimodal models (LMMs) has been hindered by the difficulty of curating large amounts of high-quality raw data from the web. To address this, we propose an alternative approach by creating a high-quality synthetic dataset specifically for video instruction-following, namely LLaVA-Video-1... | 2024-10-03T17:36:49Z | Project page: https://llava-vl.github.io/blog/2024-09-30-llava-video/ | null | null | null | null | null | null | null | null | null |
2410.02743 | MA-RLHF: Reinforcement Learning from Human Feedback with Macro Actions | ['Yekun Chai', 'Haoran Sun', 'Huang Fang', 'Shuohuan Wang', 'Yu Sun', 'Hua Wu'] | ['cs.CL'] | Reinforcement learning from human feedback (RLHF) has demonstrated effectiveness in aligning large language models (LLMs) with human preferences. However, token-level RLHF suffers from the credit assignment problem over long sequences, where delayed rewards make it challenging for the model to discern which actions con... | 2024-10-03T17:55:13Z | null | null | null | null | null | null | null | null | null | null |
2410.02745 | AVG-LLaVA: A Large Multimodal Model with Adaptive Visual Granularity | ['Zhibin Lan', 'Liqiang Niu', 'Fandong Meng', 'Wenbo Li', 'Jie Zhou', 'Jinsong Su'] | ['cs.CV', 'cs.AI', 'cs.CL'] | Recently, when dealing with high-resolution images, dominant LMMs usually divide them into multiple local images and one global image, which will lead to a large number of visual tokens. In this work, we introduce AVG-LLaVA, an LMM that can adaptively select the appropriate visual granularity based on the input image a... | 2024-09-20T10:50:21Z | Preprint | null | null | null | null | null | null | null | null | null |
2410.02749 | Training Language Models on Synthetic Edit Sequences Improves Code Synthesis | ['Ulyana Piterbarg', 'Lerrel Pinto', 'Rob Fergus'] | ['cs.LG', 'cs.CL'] | Software engineers mainly write code by editing existing programs. In contrast, language models (LMs) autoregressively synthesize programs in a single pass. One explanation for this is the scarcity of sequential edit data. While high-quality instruction data for code synthesis is scarce, edit data for synthesis is even... | 2024-10-03T17:57:22Z | ICLR 2025 | null | null | null | null | null | null | null | null | null |
2410.02760 | Erasing Conceptual Knowledge from Language Models | ['Rohit Gandikota', 'Sheridan Feucht', 'Samuel Marks', 'David Bau'] | ['cs.CL', 'cs.LG'] | In this work, we propose Erasure of Language Memory (ELM), an approach for concept-level unlearning built on the principle of matching the distribution defined by an introspective classifier. Our key insight is that effective unlearning should leverage the model's ability to evaluate its own knowledge, using the model ... | 2024-10-03T17:59:30Z | Project Page: https://elm.baulab.info | null | null | null | null | null | null | null | null | null |
2410.02761 | FakeShield: Explainable Image Forgery Detection and Localization via Multi-modal Large Language Models | ['Zhipei Xu', 'Xuanyu Zhang', 'Runyi Li', 'Zecheng Tang', 'Qing Huang', 'Jian Zhang'] | ['cs.CV', 'cs.AI'] | The rapid development of generative AI is a double-edged sword, which not only facilitates content creation but also makes image manipulation easier and more difficult to detect. Although current image forgery detection and localization (IFDL) methods are generally effective, they tend to face two challenges: \textbf{1... | 2024-10-03T17:59:34Z | Accepted by ICLR 2025 | null | null | null | null | null | null | null | null | null |
2410.02884 | LLaMA-Berry: Pairwise Optimization for O1-like Olympiad-Level Mathematical Reasoning | ['Di Zhang', 'Jianbo Wu', 'Jingdi Lei', 'Tong Che', 'Jiatong Li', 'Tong Xie', 'Xiaoshui Huang', 'Shufei Zhang', 'Marco Pavone', 'Yuqiang Li', 'Wanli Ouyang', 'Dongzhan Zhou'] | ['cs.AI', 'cs.CL'] | This paper presents an advanced mathematical problem-solving framework, LLaMA-Berry, for enhancing the mathematical reasoning ability of Large Language Models (LLMs). The framework combines Monte Carlo Tree Search (MCTS) with iterative Self-Refine to optimize the reasoning path and utilizes a pairwise reward model to e... | 2024-10-03T18:12:29Z | null | null | null | LLaMA-Berry: Pairwise Optimization for O1-like Olympiad-Level Mathematical Reasoning | ['Di Zhang', 'Jianbo Wu', 'Jingdi Lei', 'Tong Che', 'Jiatong Li', 'Tong Xie', 'Xiaoshui Huang', 'Shufei Zhang', 'Marco Pavone', 'Yuqiang Li', 'Wanli Ouyang', 'Dongzhan Zhou'] | 2024 | arXiv.org | 61 | 67 | ['Computer Science'] |
2410.02907 | NNetNav: Unsupervised Learning of Browser Agents Through Environment Interaction in the Wild | ['Shikhar Murty', 'Hao Zhu', 'Dzmitry Bahdanau', 'Christopher D. Manning'] | ['cs.CL'] | We introduce NNetNav, a method for unsupervised interaction with websites that generates synthetic demonstrations for training browser agents. Given any website, NNetNav produces these demonstrations by retroactively labeling action sequences from an exploration policy. Most work on training browser agents has relied o... | 2024-10-03T18:56:51Z | Code, Data and Models available at https://www.nnetnav.dev | null | null | null | null | null | null | null | null | null |
2410.03051 | AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark | ['Wenhao Chai', 'Enxin Song', 'Yilun Du', 'Chenlin Meng', 'Vashisht Madhavan', 'Omer Bar-Tal', 'Jenq-Neng Hwang', 'Saining Xie', 'Christopher D. Manning'] | ['cs.CV'] | Video detailed captioning is a key task which aims to generate comprehensive and coherent textual descriptions of video content, benefiting both video understanding and generation. In this paper, we propose AuroraCap, a video captioner based on a large multimodal model. We follow the simplest architecture design withou... | 2024-10-04T00:13:54Z | Accepted to ICLR 2025. Code, docs, weight, benchmark and training data are all avaliable at https://rese1f.github.io/aurora-web/ | null | null | null | null | null | null | null | null | null |
2410.03075 | Multilingual Topic Classification in X: Dataset and Analysis | ['Dimosthenis Antypas', 'Asahi Ushio', 'Francesco Barbieri', 'Jose Camacho-Collados'] | ['cs.CL'] | In the dynamic realm of social media, diverse topics are discussed daily, transcending linguistic boundaries. However, the complexities of understanding and categorising this content across various languages remain an important challenge with traditional techniques like topic modelling often struggling to accommodate t... | 2024-10-04T01:37:26Z | Accepted at EMNLP 2024 | null | null | null | null | null | null | null | null | null |
2410.03115 | X-ALMA: Plug & Play Modules and Adaptive Rejection for Quality Translation at Scale | ['Haoran Xu', 'Kenton Murray', 'Philipp Koehn', 'Hieu Hoang', 'Akiko Eriguchi', 'Huda Khayrallah'] | ['cs.CL'] | Large language models (LLMs) have achieved remarkable success across various NLP tasks with a focus on English due to English-centric pre-training and limited multilingual data. In this work, we focus on the problem of translation, and while some multilingual LLMs claim to support for hundreds of languages, models ofte... | 2024-10-04T03:17:27Z | Published as a conference paper at ICLR 2025 (spotlight) | null | null | X-ALMA: Plug & Play Modules and Adaptive Rejection for Quality Translation at Scale | ['Haoran Xu', 'Kenton Murray', 'Philipp Koehn', 'Hieu D. Hoang', 'Akiko Eriguchi', 'Huda Khayrallah'] | 2024 | International Conference on Learning Representations | 15 | 71 | ['Computer Science'] |
2410.03160 | Redefining Temporal Modeling in Video Diffusion: The Vectorized Timestep Approach | ['Yaofang Liu', 'Yumeng Ren', 'Xiaodong Cun', 'Aitor Artola', 'Yang Liu', 'Tieyong Zeng', 'Raymond H. Chan', 'Jean-michel Morel'] | ['cs.CV', 'cs.LG'] | Diffusion models have revolutionized image generation, and their extension to video generation has shown promise. However, current video diffusion models~(VDMs) rely on a scalar timestep variable applied at the clip level, which limits their ability to model complex temporal dependencies needed for various tasks like i... | 2024-10-04T05:47:39Z | Code at https://github.com/Yaofang-Liu/FVDM | null | null | null | null | null | null | null | null | null |
2410.03240 | Beyond Film Subtitles: Is YouTube the Best Approximation of Spoken Vocabulary? | ['Adam Nohejl', 'Frederikus Hudi', 'Eunike Andriani Kardinata', 'Shintaro Ozaki', 'Maria Angelica Riera Machin', 'Hongyu Sun', 'Justin Vasselli', 'Taro Watanabe'] | ['cs.CL'] | Word frequency is a key variable in psycholinguistics, useful for modeling human familiarity with words even in the era of large language models (LLMs). Frequency in film subtitles has proved to be a particularly good approximation of everyday language exposure. For many languages, however, film subtitles are not easil... | 2024-10-04T09:04:20Z | Accepted to COLING 2025. 9 pages, 3 figures | null | null | null | null | null | null | null | null | null |
2410.03290 | Grounded-VideoLLM: Sharpening Fine-grained Temporal Grounding in Video Large Language Models | ['Haibo Wang', 'Zhiyang Xu', 'Yu Cheng', 'Shizhe Diao', 'Yufan Zhou', 'Yixin Cao', 'Qifan Wang', 'Weifeng Ge', 'Lifu Huang'] | ['cs.CV', 'cs.AI'] | Video Large Language Models (Video-LLMs) have demonstrated remarkable capabilities in coarse-grained video understanding, however, they struggle with fine-grained temporal grounding. In this paper, we introduce Grounded-VideoLLM, a novel Video-LLM adept at perceiving and reasoning over specific video moments in a fine-... | 2024-10-04T10:04:37Z | null | null | null | Grounded-VideoLLM: Sharpening Fine-grained Temporal Grounding in Video Large Language Models | ['Haibo Wang', 'Zhiyang Xu', 'Yu Cheng', 'Shizhe Diao', 'Yufan Zhou', 'Yixin Cao', 'Qifan Wang', 'Weifeng Ge', 'Lifu Huang'] | 2024 | arXiv.org | 26 | 58 | ['Computer Science'] |
2410.03355 | LANTERN: Accelerating Visual Autoregressive Models with Relaxed Speculative Decoding | ['Doohyuk Jang', 'Sihwan Park', 'June Yong Yang', 'Yeonsung Jung', 'Jihun Yun', 'Souvik Kundu', 'Sung-Yub Kim', 'Eunho Yang'] | ['cs.CV', 'cs.AI'] | Auto-Regressive (AR) models have recently gained prominence in image generation, often matching or even surpassing the performance of diffusion models. However, one major limitation of AR models is their sequential nature, which processes tokens one at a time, slowing down generation compared to models like GANs or dif... | 2024-10-04T12:21:03Z | 30 pages, 13 figures, Accepted to ICLR 2025 (poster) | null | null | null | null | null | null | null | null | null |
2410.03524 | Steering Large Language Models between Code Execution and Textual Reasoning | ['Yongchao Chen', 'Harsh Jhamtani', 'Srinagesh Sharma', 'Chuchu Fan', 'Chi Wang'] | ['cs.CL'] | While a lot of recent research focuses on enhancing the textual reasoning capabilities of Large Language Models (LLMs) by optimizing the multi-agent framework or reasoning chains, several benchmark tasks can be solved with 100\% success through direct coding, which is more scalable and avoids the computational overhead... | 2024-10-04T15:44:47Z | 32 pages, 12 figures, 12 tables | The Thirteenth International Conference on Learning Representations (ICLR'2025) | null | null | null | null | null | null | null | null |
2410.03553 | Structure-Enhanced Protein Instruction Tuning: Towards General-Purpose Protein Understanding with LLMs | ['Wei Wu', 'Chao Wang', 'Liyi Chen', 'Mingze Yin', 'Yiheng Zhu', 'Kun Fu', 'Jieping Ye', 'Hui Xiong', 'Zheng Wang'] | ['cs.CL', 'q-bio.BM'] | Proteins, as essential biomolecules, play a central role in biological processes, including metabolic reactions and DNA replication. Accurate prediction of their properties and functions is crucial in biological applications. Recent development of protein language models (pLMs) with supervised fine tuning provides a pr... | 2024-10-04T16:02:50Z | Accepted by KDD2025 | null | 10.1145/3711896.3737138 | Structure-Enhanced Protein Instruction Tuning: Towards General-Purpose Protein Understanding | ['Wei Wu', 'Chao Wang', 'Liyi Chen', 'Mingze Yin', 'Yiheng Zhu', 'Kun Fu', 'Jieping Ye', 'Hui Xiong', 'Zheng Wang'] | 2024 | arXiv.org | 1 | 116 | ['Computer Science', 'Biology'] |
2410.03617 | What Matters for Model Merging at Scale? | ['Prateek Yadav', 'Tu Vu', 'Jonathan Lai', 'Alexandra Chronopoulou', 'Manaal Faruqui', 'Mohit Bansal', 'Tsendsuren Munkhdalai'] | ['cs.LG', 'cs.AI', 'cs.CL'] | Model merging aims to combine multiple expert models into a more capable single model, offering benefits such as reduced storage and serving costs, improved generalization, and support for decentralized model development. Despite its promise, previous studies have primarily focused on merging a few small models. This l... | 2024-10-04T17:17:19Z | 20 Pages, 7 Figures, 4 Tables | null | null | null | null | null | null | null | null | null |
2410.03730 | Teuken-7B-Base & Teuken-7B-Instruct: Towards European LLMs | ['Mehdi Ali', 'Michael Fromm', 'Klaudia Thellmann', 'Jan Ebert', 'Alexander Arno Weber', 'Richard Rutmann', 'Charvi Jain', 'Max Lübbering', 'Daniel Steinigen', 'Johannes Leveling', 'Katrin Klug', 'Jasper Schulze Buschhoff', 'Lena Jurkschat', 'Hammam Abdelwahab', 'Benny Jörg Stein', 'Karl-Heinz Sylla', 'Pavel Denisov', ... | ['cs.CL', 'cs.AI', 'cs.LG'] | We present two multilingual LLMs designed to embrace Europe's linguistic
diversity by supporting all 24 official languages of the European Union.
Trained on a dataset comprising around 60% non-English data and utilizing a
custom multilingual tokenizer, our models address the limitations of existing
LLMs that predominan... | 2024-09-30T16:05:38Z | null | null | null | null | null | null | null | null | null | null |
2410.03742 | Beyond Scalar Reward Model: Learning Generative Judge from Preference
Data | ['Ziyi Ye', 'Xiangsheng Li', 'Qiuchi Li', 'Qingyao Ai', 'Yujia Zhou', 'Wei Shen', 'Dong Yan', 'Yiqun Liu'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Learning from preference feedback is a common practice for aligning large
language models~(LLMs) with human value. Conventionally, preference data is
learned and encoded into a scalar reward model that connects a value head with
an LLM to produce a scalar score as preference or reward. However, scalar
models lack inter... | 2024-10-01T07:38:58Z | null | null | null | null | null | null | null | null | null | null |
2410.03750 | SQFT: Low-cost Model Adaptation in Low-precision Sparse Foundation
Models | ['Juan Pablo Muñoz', 'Jinjie Yuan', 'Nilesh Jain'] | ['cs.LG', 'cs.AI', 'cs.CL'] | Large pre-trained models (LPMs), such as large language models, have become
ubiquitous and are employed in many applications. These models are often
adapted to a desired domain or downstream task through a fine-tuning stage.
This paper proposes SQFT, an end-to-end solution for low-precision sparse
parameter-efficient f... | 2024-10-01T19:49:35Z | To be published in EMNLP-24 Findings | null | null | SQFT: Low-cost Model Adaptation in Low-precision Sparse Foundation Models | ['J. P. Munoz', 'Jinjie Yuan', 'Nilesh Jain'] | 2024 | Conference on Empirical Methods in Natural Language Processing | 3 | 28 | ['Computer Science'] |
2410.03804 | Mixture of Attentions For Speculative Decoding | ['Matthieu Zimmer', 'Milan Gritta', 'Gerasimos Lampouras', 'Haitham Bou Ammar', 'Jun Wang'] | ['cs.CL', 'cs.AI', 'cs.LG'] | The growth in the number of parameters of Large Language Models (LLMs) has
led to a significant surge in computational requirements, making them
challenging and costly to deploy. Speculative decoding (SD) leverages smaller
models to efficiently propose future tokens, which are then verified by the LLM
in parallel. Smal... | 2024-10-04T10:25:52Z | Accepted at International Conference on Learning Representations
(ICLR 2025) | null | null | Mixture of Attentions For Speculative Decoding | ['Matthieu Zimmer', 'Milan Gritta', 'Gerasimos Lampouras', 'Haitham Bou-Ammar', 'Jun Wang'] | 2024 | International Conference on Learning Representations | 6 | 41 | ['Computer Science'] |
2410.03825 | MonST3R: A Simple Approach for Estimating Geometry in the Presence of
Motion | ['Junyi Zhang', 'Charles Herrmann', 'Junhwa Hur', 'Varun Jampani', 'Trevor Darrell', 'Forrester Cole', 'Deqing Sun', 'Ming-Hsuan Yang'] | ['cs.CV'] | Estimating geometry from dynamic scenes, where objects move and deform over
time, remains a core challenge in computer vision. Current approaches often
rely on multi-stage pipelines or global optimizations that decompose the
problem into subtasks, like depth and flow, leading to complex systems prone to
errors. In this... | 2024-10-04T18:00:07Z | Accepted by ICLR 25, Project page: https://monst3r-project.github.io/ | null | null | MonST3R: A Simple Approach for Estimating Geometry in the Presence of Motion | ['Junyi Zhang', 'Charles Herrmann', 'Junhwa Hur', 'Varun Jampani', 'Trevor Darrell', 'Forrester Cole', 'Deqing Sun', 'Ming-Hsuan Yang'] | 2024 | International Conference on Learning Representations | 96 | 77 | ['Computer Science'] |
2410.03930 | Reverb: Open-Source ASR and Diarization from Rev | ['Nishchal Bhandari', 'Danny Chen', 'Miguel Ángel del Río Fernández', 'Natalie Delworth', 'Jennifer Drexler Fox', 'Migüel Jetté', 'Quinten McNamara', 'Corey Miller', 'Ondřej Novotný', 'Ján Profant', 'Nan Qin', 'Martin Ratajczak', 'Jean-Philippe Robichaud'] | ['cs.CL', 'cs.SD', 'eess.AS'] | Today, we are open-sourcing our core speech recognition and diarization
models for non-commercial use. We are releasing both a full production pipeline
for developers as well as pared-down research models for experimentation. Rev
hopes that these releases will spur research and innovation in the fast-moving
domain of v... | 2024-10-04T21:13:58Z | null | null | null | null | null | null | null | null | null | null |
2410.03960 | SwiftKV: Fast Prefill-Optimized Inference with Knowledge-Preserving
Model Transformation | ['Aurick Qiao', 'Zhewei Yao', 'Samyam Rajbhandari', 'Yuxiong He'] | ['cs.LG', 'cs.AI', 'cs.CL'] | LLM inference for enterprise applications, such as summarization, RAG, and
code-generation, typically observe much longer prompt than generations, leading
to high prefill cost and response latency. We present SwiftKV, a novel model
transformation and distillation procedure targeted at reducing the prefill
compute (in F... | 2024-10-04T22:45:26Z | null | null | null | null | null | null | null | null | null | null |
2410.04133 | An Electrocardiogram Foundation Model Built on over 10 Million
Recordings with External Evaluation across Multiple Domains | ['Jun Li', 'Aaron Aguirre', 'Junior Moura', 'Che Liu', 'Lanhai Zhong', 'Chenxi Sun', 'Gari Clifford', 'Brandon Westover', 'Shenda Hong'] | ['cs.LG', 'cs.AI', 'eess.SP'] | Artificial intelligence (AI) has demonstrated significant potential in ECG
analysis and cardiovascular disease assessment. Recently, foundation models
have played a remarkable role in advancing medical AI. The development of an
ECG foundation model holds the promise of elevating AI-ECG research to new
heights. However,... | 2024-10-05T12:12:02Z | Code: https://github.com/PKUDigitalHealth/ECGFounder | null | null | null | null | null | null | null | null | null |
2410.04223 | Multimodal Large Language Models for Inverse Molecular Design with
Retrosynthetic Planning | ['Gang Liu', 'Michael Sun', 'Wojciech Matusik', 'Meng Jiang', 'Jie Chen'] | ['cs.LG', 'physics.chem-ph', 'q-bio.BM'] | While large language models (LLMs) have integrated images, adapting them to
graphs remains challenging, limiting their applications in materials and drug
design. This difficulty stems from the need for coherent autoregressive
generation across texts and graphs. To address this, we introduce Llamole, the
first multimoda... | 2024-10-05T16:35:32Z | 27 pages, 11 figures, 4 tables | null | null | Multimodal Large Language Models for Inverse Molecular Design with Retrosynthetic Planning | ['Gang Liu', 'Michael Sun', 'Wojciech Matusik', 'Meng Jiang', 'Jie Chen'] | 2024 | International Conference on Learning Representations | 9 | 47 | ['Computer Science', 'Physics', 'Biology'] |
2410.04269 | RoQLlama: A Lightweight Romanian Adapted Language Model | ['George-Andrei Dima', 'Andrei-Marius Avram', 'Cristian-George Crăciun', 'Dumitru-Clementin Cercel'] | ['cs.CL'] | The remarkable achievements obtained by open-source large language models
(LLMs) in recent years have predominantly been concentrated on tasks involving
the English language. In this paper, we aim to advance the performance of
Llama2 models on Romanian tasks. We tackle the problem of reduced computing
resources by usin... | 2024-10-05T19:14:11Z | Accepted at EMNLP Findings 2024 (short papers) | null | null | null | null | null | null | null | null | null |
2410.04415 | Geometric Analysis of Reasoning Trajectories: A Phase Space Approach to
Understanding Valid and Invalid Multi-Hop Reasoning in LLMs | ['Javier Marin'] | ['cs.AI', 'cs.LG'] | This paper proposes a novel approach to analyzing multi-hop reasoning in
language models through Hamiltonian mechanics. We map reasoning chains in
embedding spaces to Hamiltonian systems, defining a function that balances
reasoning progression (kinetic energy) against question relevance (potential
energy). Analyzing re... | 2024-10-06T09:09:14Z | null | null | null | null | null | null | null | null | null | null |
2410.04456 | SWEb: A Large Web Dataset for the Scandinavian Languages | ['Tobias Norlund', 'Tim Isbister', 'Amaru Cuba Gyllensten', 'Paul Dos Santos', 'Danila Petrelli', 'Ariel Ekgren', 'Magnus Sahlgren'] | ['cs.CL'] | This paper presents the hitherto largest pretraining dataset for the
Scandinavian languages: the Scandinavian WEb (SWEb), comprising over one
trillion tokens. The paper details the collection and processing pipeline, and
introduces a novel model-based text extractor that significantly reduces
complexity in comparison w... | 2024-10-06T11:55:15Z | null | null | null | SWEb: A Large Web Dataset for the Scandinavian Languages | ['Tobias Norlund', 'T. Isbister', 'Amaru Cuba Gyllensten', 'Paul Gabriel dos Santos', 'Danila Petrelli', 'Ariel Ekgren', 'Magnus Sahlgren'] | 2024 | arXiv.org | 0 | 24 | ['Computer Science'] |
2410.04587 | Hammer: Robust Function-Calling for On-Device Language Models via
Function Masking | ['Qiqiang Lin', 'Muning Wen', 'Qiuying Peng', 'Guanyu Nie', 'Junwei Liao', 'Jun Wang', 'Xiaoyun Mo', 'Jiamu Zhou', 'Cheng Cheng', 'Yin Zhao', 'Jun Wang', 'Weinan Zhang'] | ['cs.LG', 'cs.AI', 'cs.SE'] | Large language models have demonstrated impressive value in performing as
autonomous agents when equipped with external tools and API calls. Nonetheless,
effectively harnessing their potential for executing complex tasks crucially
relies on enhancements in their function calling capabilities. This paper
identifies a cr... | 2024-10-06T18:57:46Z | null | null | null | Hammer: Robust Function-Calling for On-Device Language Models via Function Masking | ['Qiqiang Lin', 'Muning Wen', 'Qiuying Peng', 'Guanyu Nie', 'Junwei Liao', 'Jun Wang', 'Xiaoyun Mo', 'Jiamu Zhou', 'Cheng Cheng', 'Yin Zhao', 'Weinan Zhang'] | 2024 | arXiv.org | 21 | 27 | ['Computer Science'] |
2410.04612 | Regressing the Relative Future: Efficient Policy Optimization for
Multi-turn RLHF | ['Zhaolin Gao', 'Wenhao Zhan', 'Jonathan D. Chang', 'Gokul Swamy', 'Kianté Brantley', 'Jason D. Lee', 'Wen Sun'] | ['cs.LG', 'cs.AI', 'cs.CL'] | Large Language Models (LLMs) have achieved remarkable success at tasks like
summarization that involve a single turn of interaction. However, they can
still struggle with multi-turn tasks like dialogue that require long-term
planning. Previous works on multi-turn dialogue extend single-turn
reinforcement learning from ... | 2024-10-06T20:20:22Z | null | null | null | null | null | null | null | null | null | null |
2410.04803 | Timer-XL: Long-Context Transformers for Unified Time Series Forecasting | ['Yong Liu', 'Guo Qin', 'Xiangdong Huang', 'Jianmin Wang', 'Mingsheng Long'] | ['cs.LG', 'stat.ML'] | We present Timer-XL, a causal Transformer for unified time series
forecasting. To uniformly predict multidimensional time series, we generalize
next token prediction, predominantly adopted for 1D token sequences, to
multivariate next token prediction. The paradigm formulates various forecasting
tasks as a long-context ... | 2024-10-07T07:27:39Z | null | null | null | null | null | null | null | null | null | null |
2410.04932 | OmniBooth: Learning Latent Control for Image Synthesis with Multi-modal
Instruction | ['Leheng Li', 'Weichao Qiu', 'Xu Yan', 'Jing He', 'Kaiqiang Zhou', 'Yingjie Cai', 'Qing Lian', 'Bingbing Liu', 'Ying-Cong Chen'] | ['cs.CV'] | We present OmniBooth, an image generation framework that enables spatial
control with instance-level multi-modal customization. For all instances, the
multimodal instruction can be described through text prompts or image
references. Given a set of user-defined masks and associated text or image
guidance, our objective ... | 2024-10-07T11:26:13Z | null | null | OmniBooth: Learning Latent Control for Image Synthesis with Multi-modal Instruction | ['Leheng Li', 'Weichao Qiu', 'Xu Yan', 'Jing He', 'Kaiqiang Zhou', 'Yingjie Cai', 'Qing Lian', 'Bingbing Liu', 'Ying-Cong Chen'] | 2024 | arXiv.org | 1 | 42 | ['Computer Science'] |
2410.05077 | ZEBRA: Zero-Shot Example-Based Retrieval Augmentation for Commonsense
Question Answering | ['Francesco Maria Molfese', 'Simone Conia', 'Riccardo Orlando', 'Roberto Navigli'] | ['cs.CL'] | Current Large Language Models (LLMs) have shown strong reasoning capabilities
in commonsense question answering benchmarks, but the process underlying their
success remains largely opaque. As a consequence, recent approaches have
equipped LLMs with mechanisms for knowledge retrieval, reasoning and
introspection, not on... | 2024-10-07T14:31:43Z | Accepted at EMNLP 2024 Main Conference | null | null | null | null | null | null | null | null | null |
2410.05160 | VLM2Vec: Training Vision-Language Models for Massive Multimodal
Embedding Tasks | ['Ziyan Jiang', 'Rui Meng', 'Xinyi Yang', 'Semih Yavuz', 'Yingbo Zhou', 'Wenhu Chen'] | ['cs.CV', 'cs.AI', 'cs.CL'] | Embedding models have been crucial in enabling various downstream tasks such
as semantic similarity, information retrieval, and clustering. Recently, there
has been a surge of interest in developing universal text embedding models that
can generalize across tasks (e.g., MTEB). However, progress in learning
universal mu... | 2024-10-07T16:14:05Z | Technical Report | null | null | VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks | ['Ziyan Jiang', 'Rui Meng', 'Xinyi Yang', 'Semih Yavuz', 'Yingbo Zhou', 'Wenhu Chen'] | 2024 | International Conference on Learning Representations | 29 | 81 | ['Computer Science'] |
2410.05192 | Understanding Warmup-Stable-Decay Learning Rates: A River Valley Loss
Landscape Perspective | ['Kaiyue Wen', 'Zhiyuan Li', 'Jason Wang', 'David Hall', 'Percy Liang', 'Tengyu Ma'] | ['cs.LG', 'cs.CL', 'stat.ML'] | Training language models currently requires pre-determining a fixed compute
budget because the typical cosine learning rate schedule depends on the total
number of steps. In contrast, the Warmup-Stable-Decay (WSD) schedule uses a
constant learning rate to produce a main branch of iterates that can in
principle continue... | 2024-10-07T16:49:39Z | 45 pages,13 figures | null | null | null | null | null | null | null | null | null |
2410.05210 | Preserving Multi-Modal Capabilities of Pre-trained VLMs for Improving
Vision-Linguistic Compositionality | ['Youngtaek Oh', 'Jae Won Cho', 'Dong-Jin Kim', 'In So Kweon', 'Junmo Kim'] | ['cs.CV', 'cs.AI', 'cs.CL'] | In this paper, we propose a new method to enhance compositional understanding
in pre-trained vision and language models (VLMs) without sacrificing
performance in zero-shot multi-modal tasks. Traditional fine-tuning approaches
often improve compositional reasoning at the cost of degrading multi-modal
capabilities, prima... | 2024-10-07T17:16:20Z | EMNLP 2024 (Long, Main). Project page:
https://ytaek-oh.github.io/fsc-clip | null | null | null | null | null | null | null | null | null |
2410.05243 | Navigating the Digital World as Humans Do: Universal Visual Grounding
for GUI Agents | ['Boyu Gou', 'Ruohan Wang', 'Boyuan Zheng', 'Yanan Xie', 'Cheng Chang', 'Yiheng Shu', 'Huan Sun', 'Yu Su'] | ['cs.AI', 'cs.CL', 'cs.CV'] | Multimodal large language models (MLLMs) are transforming the capabilities of
graphical user interface (GUI) agents, facilitating their transition from
controlled simulations to complex, real-world applications across various
platforms. However, the effectiveness of these agents hinges on the robustness
of their ground... | 2024-10-07T17:47:50Z | Accepted to ICLR 2025 (Oral). Project Homepage:
https://osu-nlp-group.github.io/UGround/ | null | null | Navigating the Digital World as Humans Do: Universal Visual Grounding for GUI Agents | ['Boyu Gou', 'Ruohan Wang', 'Boyuan Zheng', 'Yanan Xie', 'Cheng Chang', 'Yiheng Shu', 'Huan Sun', 'Yu Su'] | 2024 | International Conference on Learning Representations | 96 | 70 | ['Computer Science'] |
2410.05255 | Bridging SFT and DPO for Diffusion Model Alignment with Self-Sampling
Preference Optimization | ['Daoan Zhang', 'Guangchen Lan', 'Dong-Jun Han', 'Wenlin Yao', 'Xiaoman Pan', 'Hongming Zhang', 'Mingxiao Li', 'Pengcheng Chen', 'Yu Dong', 'Christopher Brinton', 'Jiebo Luo'] | ['cs.CV', 'cs.LG', 'I.2.6; I.2.10; I.4.0; I.5.0'] | Existing post-training techniques are broadly categorized into supervised
fine-tuning (SFT) and reinforcement learning (RL) methods; the former is stable
during training but suffers from limited generalization, while the latter,
despite its stronger generalization capability, relies on additional preference
data or rew... | 2024-10-07T17:56:53Z | null | null | null | SePPO: Semi-Policy Preference Optimization for Diffusion Alignment | ['Daoan Zhang', 'Guangchen Lan', 'Dong-Jun Han', 'Wenlin Yao', 'Xiaoman Pan', 'Hongming Zhang', 'Mingxiao Li', 'Pengcheng Chen', 'Yu Dong', 'Christopher G. Brinton', 'Jiebo Luo'] | 2024 | arXiv.org | 6 | 36 | ['Computer Science'] |
2410.05258 | Differential Transformer | ['Tianzhu Ye', 'Li Dong', 'Yuqing Xia', 'Yutao Sun', 'Yi Zhu', 'Gao Huang', 'Furu Wei'] | ['cs.CL', 'cs.LG'] | Transformer tends to overallocate attention to irrelevant context. In this
work, we introduce Diff Transformer, which amplifies attention to the relevant
context while canceling noise. Specifically, the differential attention
mechanism calculates attention scores as the difference between two separate
softmax attention... | 2024-10-07T17:57:38Z | Accepted as an Oral Presentation at ICLR 2025 | null | null | null | null | null | null | null | null | null |
2410.05346 | AnyAttack: Towards Large-scale Self-supervised Adversarial Attacks on
Vision-language Models | ['Jiaming Zhang', 'Junhong Ye', 'Xingjun Ma', 'Yige Li', 'Yunfan Yang', 'Yunhao Chen', 'Jitao Sang', 'Dit-Yan Yeung'] | ['cs.LG', 'cs.AI'] | Due to their multimodal capabilities, Vision-Language Models (VLMs) have
found numerous impactful applications in real-world scenarios. However, recent
studies have revealed that VLMs are vulnerable to image-based adversarial
attacks. Traditional targeted adversarial attacks require specific targets and
labels, limitin... | 2024-10-07T09:45:18Z | CVPR 2025 | null | null | AnyAttack: Towards Large-scale Self-supervised Adversarial Attacks on Vision-language Models | ['Jiaming Zhang', 'Junhong Ye', 'Xingjun Ma', 'Yige Li', 'Yunfan Yang', 'Jitao Sang', 'Dit-Yan Yeung'] | 2024 | null | 0 | 35 | ['Computer Science'] |
2410.05355 | Falcon Mamba: The First Competitive Attention-free 7B Language Model | ['Jingwei Zuo', 'Maksim Velikanov', 'Dhia Eddine Rhaiem', 'Ilyas Chahed', 'Younes Belkada', 'Guillaume Kunsch', 'Hakim Hacid'] | ['cs.CL', 'cs.AI'] | In this technical report, we present Falcon Mamba 7B, a new base large
language model based on the novel Mamba architecture. Falcon Mamba 7B is
trained on 5.8 trillion tokens with carefully selected data mixtures. As a pure
Mamba-based model, Falcon Mamba 7B surpasses leading open-weight models based
on Transformers, s... | 2024-10-07T15:40:45Z | null | null | null | null | null | null | null | null | null | null |
2410.05363 | Towards World Simulator: Crafting Physical Commonsense-Based Benchmark
for Video Generation | ['Fanqing Meng', 'Jiaqi Liao', 'Xinyu Tan', 'Wenqi Shao', 'Quanfeng Lu', 'Kaipeng Zhang', 'Yu Cheng', 'Dianqi Li', 'Yu Qiao', 'Ping Luo'] | ['cs.CV'] | Text-to-video (T2V) models like Sora have made significant strides in
visualizing complex prompts, which is increasingly viewed as a promising path
towards constructing the universal world simulator. Cognitive psychologists
believe that the foundation for achieving this goal is the ability to
understand intuitive physi... | 2024-10-07T17:56:04Z | Project Page: https://phygenbench123.github.io/ | null | null | Towards World Simulator: Crafting Physical Commonsense-Based Benchmark for Video Generation | ['Fanqing Meng', 'Jiaqi Liao', 'Xinyu Tan', 'Wenqi Shao', 'Quanfeng Lu', 'Kaipeng Zhang', 'Yu Cheng', 'Dianqi Li', 'Yu Qiao', 'Ping Luo'] | 2024 | arXiv.org | 27 | 38 | ['Computer Science'] |
2410.05470 | Image Watermarks are Removable Using Controllable Regeneration from
Clean Noise | ['Yepeng Liu', 'Yiren Song', 'Hai Ci', 'Yu Zhang', 'Haofan Wang', 'Mike Zheng Shou', 'Yuheng Bu'] | ['cs.CR', 'cs.AI', 'cs.CV'] | Image watermark techniques provide an effective way to assert ownership,
deter misuse, and trace content sources, which has become increasingly
essential in the era of large generative models. A critical attribute of
watermark techniques is their robustness against various manipulations. In this
paper, we introduce a w... | 2024-10-07T20:04:29Z | ICLR2025 | null | null | null | null | null | null | null | null | null |
2410.05472 | Neural machine translation system for Lezgian, Russian and Azerbaijani
languages | ['Alidar Asvarov', 'Andrey Grabovoy'] | ['cs.CL'] | We release the first neural machine translation system for translation
between Russian, Azerbaijani and the endangered Lezgian languages, as well as
monolingual and parallel datasets collected and aligned for training and
evaluating the system. Multiple experiments are conducted to identify how
different sets of traini... | 2024-10-07T20:08:10Z | null | null | 10.1109/ISPRAS64596.2024.10899143 | null | null | null | null | null | null | null |
2410.05474 | R-Bench: Are your Large Multimodal Model Robust to Real-world
Corruptions? | ['Chunyi Li', 'Jianbo Zhang', 'Zicheng Zhang', 'Haoning Wu', 'Yuan Tian', 'Wei Sun', 'Guo Lu', 'Xiaohong Liu', 'Xiongkuo Min', 'Weisi Lin', 'Guangtao Zhai'] | ['cs.CV', 'cs.MM', 'eess.IV'] | The outstanding performance of Large Multimodal Models (LMMs) has made them
widely applied in vision-related tasks. However, various corruptions in the
real world mean that images will not be as ideal as in simulations, presenting
significant challenges for the practical application of LMMs. To address this
issue, we i... | 2024-10-07T20:12:08Z | null | null | null | null | null | null | null | null | null | null |
2410.05610 | Structural Reasoning Improves Molecular Understanding of LLM | ['Yunhui Jang', 'Jaehyung Kim', 'Sungsoo Ahn'] | ['cs.LG', 'cs.AI'] | Recently, large language models (LLMs) have shown significant progress,
approaching human perception levels. In this work, we demonstrate that despite
these advances, LLMs still struggle to reason using molecular structural
information. This gap is critical because many molecular properties, including
functional groups... | 2024-10-08T01:49:48Z | null | null | null | null | null | null | null | null | null | null |
2410.05643 | TRACE: Temporal Grounding Video LLM via Causal Event Modeling | ['Yongxin Guo', 'Jingyu Liu', 'Mingda Li', 'Qingbin Liu', 'Xi Chen', 'Xiaoying Tang'] | ['cs.CV'] | Video Temporal Grounding (VTG) is a crucial capability for video
understanding models and plays a vital role in downstream tasks such as video
browsing and editing. To effectively handle various tasks simultaneously and
enable zero-shot prediction, there is a growing trend in employing video LLMs
for VTG tasks. However... | 2024-10-08T02:46:30Z | ICLR 2025 | null | null | null | null | null | null | null | null | null |
2410.05677 | T2V-Turbo-v2: Enhancing Video Generation Model Post-Training through
Data, Reward, and Conditional Guidance Design | ['Jiachen Li', 'Qian Long', 'Jian Zheng', 'Xiaofeng Gao', 'Robinson Piramuthu', 'Wenhu Chen', 'William Yang Wang'] | ['cs.CV', 'cs.AI'] | In this paper, we focus on enhancing a diffusion-based text-to-video (T2V)
model during the post-training phase by distilling a highly capable consistency
model from a pretrained T2V model. Our proposed method, T2V-Turbo-v2,
introduces a significant advancement by integrating various supervision
signals, including high... | 2024-10-08T04:30:06Z | Project Page: https://t2v-turbo-v2.github.io/ | null | null | null | null | null | null | null | null | null |
2410.05954 | Pyramidal Flow Matching for Efficient Video Generative Modeling | ['Yang Jin', 'Zhicheng Sun', 'Ningyuan Li', 'Kun Xu', 'Kun Xu', 'Hao Jiang', 'Nan Zhuang', 'Quzhe Huang', 'Yang Song', 'Yadong Mu', 'Zhouchen Lin'] | ['cs.CV', 'cs.LG'] | Video generation requires modeling a vast spatiotemporal space, which demands
significant computational resources and data usage. To reduce the complexity,
the prevailing approaches employ a cascaded architecture to avoid direct
training with full resolution latent. Despite reducing computational demands,
the separate ... | 2024-10-08T12:10:37Z | ICLR 2025 | null | null | null | null | null | null | null | null | null |
2410.05993 | Aria: An Open Multimodal Native Mixture-of-Experts Model | ['Dongxu Li', 'Yudong Liu', 'Haoning Wu', 'Yue Wang', 'Zhiqi Shen', 'Bowen Qu', 'Xinyao Niu', 'Fan Zhou', 'Chengen Huang', 'Yanpeng Li', 'Chongyan Zhu', 'Xiaoyi Ren', 'Chao Li', 'Yifan Ye', 'Peng Liu', 'Lihuan Zhang', 'Hanshu Yan', 'Guoyin Wang', 'Bei Chen', 'Junnan Li'] | ['cs.CV'] | Information comes in diverse modalities. Multimodal native AI models are
essential to integrate real-world information and deliver comprehensive
understanding. While proprietary multimodal native models exist, their lack of
openness imposes obstacles for adoptions, let alone adaptations. To fill this
gap, we introduce ... | 2024-10-08T12:44:57Z | null | null | Aria: An Open Multimodal Native Mixture-of-Experts Model | ['Dongxu Li', 'Yudong Liu', 'Haoning Wu', 'Yue Wang', 'Zhiqi Shen', 'Bowen Qu', 'Xinyao Niu', 'Guoyin Wang', 'Bei Chen', 'Junnan Li'] | 2024 | arXiv.org | 65 | 24 | ['Computer Science'] |
2410.06234 | TEOChat: A Large Vision-Language Assistant for Temporal Earth
Observation Data | ['Jeremy Andrew Irvin', 'Emily Ruoyu Liu', 'Joyce Chuyi Chen', 'Ines Dormoy', 'Jinyoung Kim', 'Samar Khanna', 'Zhuo Zheng', 'Stefano Ermon'] | ['cs.CV', 'cs.AI', 'cs.LG'] | Large vision and language assistants have enabled new capabilities for
interpreting natural images. These approaches have recently been adapted to
earth observation data, but they are only able to handle single image inputs,
limiting their use for many real-world tasks. In this work, we develop a new
vision and languag... | 2024-10-08T17:45:51Z | Published at ICLR 2025 | null | null | null | null | null | null | null | null | null |
2410.06264 | Think While You Generate: Discrete Diffusion with Planned Denoising | ['Sulin Liu', 'Juno Nam', 'Andrew Campbell', 'Hannes Stärk', 'Yilun Xu', 'Tommi Jaakkola', 'Rafael Gómez-Bombarelli'] | ['cs.LG', 'cs.AI', 'cs.CL', 'cs.CV', 'stat.ML'] | Discrete diffusion has achieved state-of-the-art performance, outperforming
or approaching autoregressive models on standard benchmarks. In this work, we
introduce Discrete Diffusion with Planned Denoising (DDPD), a novel framework
that separates the generation process into two models: a planner and a
denoiser. At infe... | 2024-10-08T18:03:34Z | ICLR 2025 | null | null | null | null | null | null | null | null | null |
2410.06364 | Sketch to Adapt: Fine-Tunable Sketches for Efficient LLM Adaptation | ['Tianyi Zhang', 'Junda Su', 'Aditya Desai', 'Oscar Wu', 'Zhaozhuo Xu', 'Anshumali Shrivastava'] | ['cs.LG'] | Adapting pre-trained large language models (LLMs) is crucial but challenging
due to their enormous size. Parameter-efficient fine-tuning (PEFT) techniques
typically employ additive adapters applied to frozen model weights. To further
reduce memory usage, model weights can be compressed through quantization.
However, ex... | 2024-10-08T20:58:24Z | null | null | null | Sketch to Adapt: Fine-Tunable Sketches for Efficient LLM Adaptation | ['Tianyi Zhang', 'Junda Su', 'Aditya Desai', 'Oscar Wu', 'Zhaozhuo Xu', 'Anshumali Shrivastava'] | 2024 | null | 0 | 61 | ['Computer Science'] |
2410.06542 | MedImageInsight: An Open-Source Embedding Model for General Domain
Medical Imaging | ['Noel C. F. Codella', 'Ying Jin', 'Shrey Jain', 'Yu Gu', 'Ho Hin Lee', 'Asma Ben Abacha', 'Alberto Santamaria-Pang', 'Will Guyman', 'Naiteek Sangani', 'Sheng Zhang', 'Hoifung Poon', 'Stephanie Hyland', 'Shruthi Bannur', 'Javier Alvarez-Valle', 'Xue Li', 'John Garrett', 'Alan McMillan', 'Gaurav Rajguru', 'Madhu Maddi',... | ['eess.IV', 'cs.CV'] | In this work, we present MedImageInsight, an open-source medical imaging
embedding model. MedImageInsight is trained on medical images with associated
text and labels across a diverse collection of domains, including X-Ray, CT,
MRI, dermoscopy, OCT, fundus photography, ultrasound, histopathology, and
mammography. Rigor... | 2024-10-09T04:36:47Z | null | null | null | null | null | null | null | null | null | null |
2410.06551 | InstantIR: Blind Image Restoration with Instant Generative Reference | ['Jen-Yuan Huang', 'Haofan Wang', 'Qixun Wang', 'Xu Bai', 'Hao Ai', 'Peng Xing', 'Jen-Tse Huang'] | ['cs.CV', 'cs.AI', 'cs.LG'] | Handling test-time unknown degradation is the major challenge in Blind Image
Restoration (BIR), necessitating high model generalization. An effective
strategy is to incorporate prior knowledge, either from human input or
generative model. In this paper, we introduce Instant-reference Image
Restoration (InstantIR), a no... | 2024-10-09T05:15:29Z | null | null | InstantIR: Blind Image Restoration with Instant Generative Reference | ['Jen-Yuan Huang', 'Haofan Wang', 'Qixun Wang', 'Xu Bai', 'Hao Ai', 'Peng Xing', 'Jen-Tse Huang'] | 2024 | arXiv.org | 1 | 61 | ['Computer Science'] |
2410.06577 | Rodimus*: Breaking the Accuracy-Efficiency Trade-Off with Efficient
Attentions | ['Zhihao He', 'Hang Yu', 'Zi Gong', 'Shizhan Liu', 'Jianguo Li', 'Weiyao Lin'] | ['cs.CL'] | Recent advancements in Transformer-based large language models (LLMs) have
set new standards in natural language processing. However, the classical
softmax attention incurs significant computational costs, leading to a $O(T)$
complexity for per-token generation, where $T$ represents the context length.
This work explor... | 2024-10-09T06:22:36Z | Accepted by ICLR 2025. Camera-ready Version | null | null | null | null | null | null | null | null | null |
2410.06581 | Enhancing Legal Case Retrieval via Scaling High-quality Synthetic
Query-Candidate Pairs | ['Cheng Gao', 'Chaojun Xiao', 'Zhenghao Liu', 'Huimin Chen', 'Zhiyuan Liu', 'Maosong Sun'] | ['cs.IR'] | Legal case retrieval (LCR) aims to provide similar cases as references for a
given fact description. This task is crucial for promoting consistent judgments
in similar cases, effectively enhancing judicial fairness and improving work
efficiency for judges. However, existing works face two main challenges for
real-world... | 2024-10-09T06:26:39Z | 15 pages, 3 figures, accepted by EMNLP 2024 | null | null | null | null | null | null | null | null | null |
2410.06593 | Towards Natural Image Matting in the Wild via Real-Scenario Prior | ['Ruihao Xia', 'Yu Liang', 'Peng-Tao Jiang', 'Hao Zhang', 'Qianru Sun', 'Yang Tang', 'Bo Li', 'Pan Zhou'] | ['cs.CV'] | Recent approaches attempt to adapt powerful interactive segmentation models,
such as SAM, to interactive matting and fine-tune the models based on synthetic
matting datasets. However, models trained on synthetic data fail to generalize
to complex and occlusion scenes. We address this challenge by proposing a new
mattin... | 2024-10-09T06:43:19Z | null | null | null | null | null | null | null | null | null | null |
2410.06614 | Pair-VPR: Place-Aware Pre-training and Contrastive Pair Classification
for Visual Place Recognition with Vision Transformers | ['Stephen Hausler', 'Peyman Moghadam'] | ['cs.RO', 'cs.AI', 'cs.CV'] | In this work we propose a novel joint training method for Visual Place
Recognition (VPR), which simultaneously learns a global descriptor and a pair
classifier for re-ranking. The pair classifier can predict whether a given pair
of images are from the same place or not. The network only comprises Vision
Transformer com... | 2024-10-09T07:09:46Z | null | null | 10.1109/LRA.2025.3546512 | Pair-VPR: Place-Aware Pre-Training and Contrastive Pair Classification for Visual Place Recognition With Vision Transformers | ['Stephen Hausler', 'Peyman Moghadam'] | 2024 | IEEE Robotics and Automation Letters | 4 | 49 | ['Computer Science'] |
2410.06734 | MimicTalk: Mimicking a personalized and expressive 3D talking face in
minutes | ['Zhenhui Ye', 'Tianyun Zhong', 'Yi Ren', 'Ziyue Jiang', 'Jiawei Huang', 'Rongjie Huang', 'Jinglin Liu', 'Jinzheng He', 'Chen Zhang', 'Zehan Wang', 'Xize Chen', 'Xiang Yin', 'Zhou Zhao'] | ['cs.CV'] | Talking face generation (TFG) aims to animate a target identity's face to
create realistic talking videos. Personalized TFG is a variant that emphasizes
the perceptual identity similarity of the synthesized result (from the
perspective of appearance and talking style). While previous works typically
solve this problem ... | 2024-10-09T10:12:37Z | Accepted by NeurIPS 2024 | null | null | null | null | null | null | null | null | null |
2410.06885 | F5-TTS: A Fairytaler that Fakes Fluent and Faithful Speech with Flow
Matching | ['Yushen Chen', 'Zhikang Niu', 'Ziyang Ma', 'Keqi Deng', 'Chunhui Wang', 'Jian Zhao', 'Kai Yu', 'Xie Chen'] | ['eess.AS', 'cs.SD'] | This paper introduces F5-TTS, a fully non-autoregressive text-to-speech
system based on flow matching with Diffusion Transformer (DiT). Without
requiring complex designs such as duration model, text encoder, and phoneme
alignment, the text input is simply padded with filler tokens to the same
length as input speech, an... | 2024-10-09T13:46:34Z | 17 pages, 9 tables, 3 figures | null | null | null | null | null | null | null | null | null |
2410.06961 | Self-Boosting Large Language Models with Synthetic Preference Data | ['Qingxiu Dong', 'Li Dong', 'Xingxing Zhang', 'Zhifang Sui', 'Furu Wei'] | ['cs.CL', 'cs.AI'] | Through alignment with human preferences, Large Language Models (LLMs) have
advanced significantly in generating honest, harmless, and helpful responses.
However, collecting high-quality preference data is a resource-intensive and
creativity-demanding process, especially for the continual improvement of LLMs.
We introd... | 2024-10-09T14:57:31Z | null | null | null | null | null | null | null | null | null | null |
2410.07002 | CursorCore: Assist Programming through Aligning Anything | ['Hao Jiang', 'Qi Liu', 'Rui Li', 'Shengyu Ye', 'Shijin Wang'] | ['cs.CL', 'cs.AI', 'cs.SE'] | Large language models have been successfully applied to programming
assistance tasks, such as code completion, code insertion, and instructional
code editing. However, these applications remain insufficiently automated and
struggle to effectively integrate various types of information during the
programming process, in... | 2024-10-09T15:45:52Z | null | null | null | null | null | null | null | null | null | null |
2410.07064 | Data Selection via Optimal Control for Language Models | ['Yuxian Gu', 'Li Dong', 'Hongning Wang', 'Yaru Hao', 'Qingxiu Dong', 'Furu Wei', 'Minlie Huang'] | ['cs.CL'] | This work investigates the selection of high-quality pre-training data from
massive corpora to enhance LMs' capabilities for downstream usage. We formulate
data selection as a generalized Optimal Control problem, which can be solved
theoretically by Pontryagin's Maximum Principle (PMP), yielding a set of
necessary cond... | 2024-10-09T17:06:57Z | ICLR 2025 Oral | null | null | Data Selection via Optimal Control for Language Models | ['Yuxian Gu', 'Li Dong', 'Hongning Wang', 'Y. Hao', 'Qingxiu Dong', 'Furu Wei', 'Minlie Huang'] | 2024 | International Conference on Learning Representations | 9 | 87 | ['Computer Science'] |
2410.07095 | MLE-bench: Evaluating Machine Learning Agents on Machine Learning
Engineering | ['Jun Shern Chan', 'Neil Chowdhury', 'Oliver Jaffe', 'James Aung', 'Dane Sherburn', 'Evan Mays', 'Giulio Starace', 'Kevin Liu', 'Leon Maksin', 'Tejal Patwardhan', 'Lilian Weng', 'Aleksander Mądry'] | ['cs.CL'] | We introduce MLE-bench, a benchmark for measuring how well AI agents perform
at machine learning engineering. To this end, we curate 75 ML
engineering-related competitions from Kaggle, creating a diverse set of
challenging tasks that test real-world ML engineering skills such as training
models, preparing datasets, and... | 2024-10-09T17:34:27Z | 10 pages, 17 pages appendix. Equal contribution by first seven
authors, authors randomized. ICLR version | null | null | null | null | null | null | null | null | null |
2410.07133 | EvolveDirector: Approaching Advanced Text-to-Image Generation with Large
Vision-Language Models | ['Rui Zhao', 'Hangjie Yuan', 'Yujie Wei', 'Shiwei Zhang', 'Yuchao Gu', 'Lingmin Ran', 'Xiang Wang', 'Zhangjie Wu', 'Junhao Zhang', 'Yingya Zhang', 'Mike Zheng Shou'] | ['cs.CV'] | Recent advancements in generation models have showcased remarkable
capabilities in generating fantastic content. However, most of them are trained
on proprietary high-quality data, and some models withhold their parameters and
only provide accessible application programming interfaces (APIs), limiting
their benefits fo... | 2024-10-09T17:52:28Z | null | null | null | EvolveDirector: Approaching Advanced Text-to-Image Generation with Large Vision-Language Models | ['Rui Zhao', 'Hangjie Yuan', 'Yujie Wei', 'Shiwei Zhang', 'Yuchao Gu', 'L. Ran', 'Xiang Wang', 'Zhangjie Wu', 'Junhao Zhang', 'Yingya Zhang', 'Mike Zheng Shou'] | 2024 | Neural Information Processing Systems | 4 | 76 | ['Computer Science'] |
2410.07153 | CHASE: Learning Convex Hull Adaptive Shift for Skeleton-based
Multi-Entity Action Recognition | ['Yuhang Wen', 'Mengyuan Liu', 'Songtao Wu', 'Beichen Ding'] | ['cs.CV', 'cs.LG'] | Skeleton-based multi-entity action recognition is a challenging task aiming
to identify interactive actions or group activities involving multiple diverse
entities. Existing models for individuals often fall short in this task due to
the inherent distribution discrepancies among entity skeletons, leading to
suboptimal ... | 2024-10-09T17:55:43Z | NeurIPS 2024 Camera-ready Version. Project Website:
https://necolizer.github.io/CHASE/ | null | null | CHASE: Learning Convex Hull Adaptive Shift for Skeleton-based Multi-Entity Action Recognition | ['Yuhang Wen', 'Mengyuan Liu', 'Songtao Wu', 'Beichen Ding'] | 2024 | Neural Information Processing Systems | 1 | 102 | ['Computer Science'] |
2410.07157 | InstructG2I: Synthesizing Images from Multimodal Attributed Graphs | ['Bowen Jin', 'Ziqi Pang', 'Bingjun Guo', 'Yu-Xiong Wang', 'Jiaxuan You', 'Jiawei Han'] | ['cs.AI', 'cs.CL', 'cs.CV', 'cs.LG', 'cs.SI'] | In this paper, we approach an overlooked yet critical task Graph2Image:
generating images from multimodal attributed graphs (MMAGs). This task poses
significant challenges due to the explosion in graph size, dependencies among
graph entities, and the need for controllability in graph conditions. To
address these challe... | 2024-10-09T17:56:15Z | 16 pages | NeurIPS 2024 | null | null | null | null | null | null | null | null |
2410.07163 | Simplicity Prevails: Rethinking Negative Preference Optimization for LLM
Unlearning | ['Chongyu Fan', 'Jiancheng Liu', 'Licong Lin', 'Jinghan Jia', 'Ruiqi Zhang', 'Song Mei', 'Sijia Liu'] | ['cs.CL', 'cs.AI', 'cs.LG'] | This work studies the problem of large language model (LLM) unlearning,
aiming to remove unwanted data influences (e.g., copyrighted or harmful
content) while preserving model utility. Despite the increasing demand for
unlearning, a technically-grounded optimization framework is lacking. Gradient
ascent (GA)-type metho... | 2024-10-09T17:58:12Z | null | null | null | null | null | null | null | null | null | null |
2410.07167 | Deciphering Cross-Modal Alignment in Large Vision-Language Models with
Modality Integration Rate | ['Qidong Huang', 'Xiaoyi Dong', 'Pan Zhang', 'Yuhang Zang', 'Yuhang Cao', 'Jiaqi Wang', 'Dahua Lin', 'Weiming Zhang', 'Nenghai Yu'] | ['cs.CV', 'cs.CL'] | We present the Modality Integration Rate (MIR), an effective, robust, and
generalized metric to indicate the multi-modal pre-training quality of Large
Vision Language Models (LVLMs). Large-scale pre-training plays a critical role
in building capable LVLMs, while evaluating its training quality without the
costly superv... | Project page: https://github.com/shikiw/Modality-Integration-Rate | null | null | Deciphering Cross-Modal Alignment in Large Vision-Language Models with Modality Integration Rate | ['Qidong Huang', 'Xiao-wen Dong', 'Pan Zhang', 'Yuhang Zang', 'Yuhang Cao', 'Jiaqi Wang', 'Dahua Lin', 'Weiming Zhang', 'Neng H. Yu'] | 2024 | arXiv.org | 9 | 53 | ['Computer Science'] |
2410.07168 | Sylber: Syllabic Embedding Representation of Speech from Raw Audio | ['Cheol Jun Cho', 'Nicholas Lee', 'Akshat Gupta', 'Dhruv Agarwal', 'Ethan Chen', 'Alan W Black', 'Gopala K. Anumanchipalli'] | ['cs.CL', 'cs.SD', 'eess.AS'] | Syllables are compositional units of spoken language that efficiently
structure human speech perception and production. However, current neural
speech representations lack such structure, resulting in dense token sequences
that are costly to process. To bridge this gap, we propose a new model, Sylber,
that produces spe... | Accepted at ICLR 2025 | null | null | Sylber: Syllabic Embedding Representation of Speech from Raw Audio | ['Cheol Jun Cho', 'Nicholas Lee', 'Akshat Gupta', 'Dhruv Agarwal', 'Ethan Chen', 'Alan W. Black', 'G. Anumanchipalli'] | 2024 | International Conference on Learning Representations | 4 | 69 | ['Computer Science', 'Engineering'] |
2410.07169 | VIP: Vision Instructed Pre-training for Robotic Manipulation | ['Zhuoling Li', 'Liangliang Ren', 'Jinrong Yang', 'Yong Zhao', 'Xiaoyang Wu', 'Zhenhua Xu', 'Xiang Bai', 'Hengshuang Zhao'] | ['cs.RO'] | The effectiveness of scaling up training data in robotic manipulation is
still limited. A primary challenge in manipulation is the tasks are diverse,
and the trained policy would be confused if the task targets are not specified
clearly. Existing works primarily rely on text instruction to describe targets.
However, we... | 2024-10-09T17:59:06Z | null | null | null | VIP: Vision Instructed Pre-training for Robotic Manipulation | ['Zhuoling Li', 'Liangliang Ren', 'Jinrong Yang', 'Yong Zhao', 'Xiaoyang Wu', 'Zhenhua Xu', 'Xiang Bai', 'Hengshuang Zhao'] | 2024 | null | 0 | 0 | ['Computer Science'] |
2410.07171 | IterComp: Iterative Composition-Aware Feedback Learning from Model
Gallery for Text-to-Image Generation | ['Xinchen Zhang', 'Ling Yang', 'Guohao Li', 'Yaqi Cai', 'Jiake Xie', 'Yong Tang', 'Yujiu Yang', 'Mengdi Wang', 'Bin Cui'] | ['cs.CV'] | Advanced diffusion models like RPG, Stable Diffusion 3 and FLUX have made
notable strides in compositional text-to-image generation. However, these
methods typically exhibit distinct strengths for compositional generation, with
some excelling in handling attribute binding and others in spatial
relationships. This dispa... | 2024-10-09T17:59:13Z | ICLR 2025. Project: https://github.com/YangLing0818/IterComp | null | null | null | null | null | null | null | null | null |
2410.07173 | Better Language Models Exhibit Higher Visual Alignment | ['Jona Ruthardt', 'Gertjan J. Burghouts', 'Serge Belongie', 'Yuki M. Asano'] | ['cs.CL', 'cs.AI', 'cs.CV'] | How well do text-only Large Language Models (LLMs) naturally align with the
visual world? We provide the first direct analysis by utilizing frozen text
representations in a discriminative vision-language model framework and
measuring zero-shot generalization on unseen classes. We find decoder-based
LLMs exhibit high in... | 2024-10-09T17:59:33Z | null | null | null | Better Language Models Exhibit Higher Visual Alignment | ['Jona Ruthardt', 'G. Burghouts', 'Serge J. Belongie', 'Yuki M. Asano'] | 2024 | null | 0 | 58 | ['Computer Science'] |