Schema (column: type, value range or distinct-value count):

arxiv_id: float64, 1.5k–2.51k
title: string, length 9–178
authors: string, length 2–22.8k
categories: string, length 4–146
summary: string, length 103–1.92k
published: date string, 2015-02-06 10:44:00 to 2025-07-10 17:59:58
comments: string, length 2–417
journal_ref: string, 321 distinct values
doi: string, 398 distinct values
ss_title: string, length 8–159
ss_authors: string, length 11–8.38k
ss_year: float64, 2.02k–2.03k
ss_venue: string, 281 distinct values
ss_citationCount: float64, 0–134k
ss_referenceCount: float64, 0–429
ss_fieldsOfStudy: string, 47 distinct values

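Because arxiv_id and ss_year are stored as float64, the preview renders them with thousands separators and dropped trailing zeros (e.g. 2,503.0854 for the ID 2503.08540). A minimal sketch of rebuilding the canonical ID string from such a cell, assuming the post-2015 arXiv scheme with five-digit sequence numbers (the helper name is ours):

```python
def canonical_arxiv_id(value: float) -> str:
    """Rebuild the canonical YYMM.NNNNN arXiv ID from a float64 cell.

    Assumes the new-style arXiv scheme (five digits after the dot),
    which matches this dataset's 2015-2025 date range.
    """
    yymm = int(value)                      # YYMM part, e.g. 2503
    seq = round((value - yymm) * 100_000)  # five-digit sequence number
    return f"{yymm:04d}.{seq:05d}"

# The float 2503.0854 recovers its dropped trailing zero:
print(canonical_arxiv_id(2503.0854))  # 2503.08540
```

The same idea applies to ss_year (cast to int before display); IDs already carrying five digits pass through unchanged.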
arxiv_id: 2503.08307
title: $^R$FLAV: Rolling Flow matching for infinite Audio Video generation
authors: ['Alex Ergasti', 'Giuseppe Gabriele Tarollo', 'Filippo Botti', 'Tomaso Fontanini', 'Claudio Ferrari', 'Massimo Bertozzi', 'Andrea Prati']
categories: ['cs.CV']
summary: Joint audio-video (AV) generation is still a significant challenge in generative AI, primarily due to three critical requirements: quality of the generated samples, seamless multimodal synchronization and temporal coherence, with audio tracks that match the visual data and vice versa, and limitless video duration. In t...
published: 2025-03-11T11:18:47Z
comments / journal_ref / doi: null
ss_* (all Semantic Scholar fields): null

arxiv_id: 2503.08363
title: Parametric Point Cloud Completion for Polygonal Surface Reconstruction
authors: ['Zhaiyu Chen', 'Yuqing Wang', 'Liangliang Nan', 'Xiao Xiang Zhu']
categories: ['cs.CV']
summary: Existing polygonal surface reconstruction methods heavily depend on input completeness and struggle with incomplete point clouds. We argue that while current point cloud completion techniques may recover missing points, they are not optimized for polygonal surface reconstruction, where the parametric representation of ...
published: 2025-03-11T12:20:24Z
comments: CVPR 2025
journal_ref / doi: null
ss_title: Parametric Point Cloud Completion for Polygonal Surface Reconstruction
ss_authors: ['Zhaiyu Chen', 'Yuqing Wang', 'Liangliang Nan', 'Xiao Xiang Zhu']
ss_year: 2025; ss_venue: arXiv.org; ss_citationCount: 0; ss_referenceCount: 50; ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2503.08373
title: nnInteractive: Redefining 3D Promptable Segmentation
authors: ['Fabian Isensee', 'Maximilian Rokuss', 'Lars Krämer', 'Stefan Dinkelacker', 'Ashis Ravindran', 'Florian Stritzke', 'Benjamin Hamm', 'Tassilo Wald', 'Moritz Langenberg', 'Constantin Ulrich', 'Jonathan Deissler', 'Ralf Floca', 'Klaus Maier-Hein']
categories: ['cs.CV']
summary: Accurate and efficient 3D segmentation is essential for both clinical and research applications. While foundation models like SAM have revolutionized interactive segmentation, their 2D design and domain shift limitations make them ill-suited for 3D medical images. Current adaptations address some of these challenges bu...
published: 2025-03-11T12:30:34Z
comments: Fabian Isensee, Maximilian Rokuss and Lars Krämer contributed equally. Each co-first author may list themselves as lead author on their CV
journal_ref / doi: null
ss_title: nnInteractive: Redefining 3D Promptable Segmentation
ss_authors: ['Fabian Isensee', 'Maximilian Rokuss', 'Lars Krämer', 'Stefan Dinkelacker', 'Ashis Ravindran', 'Florian Stritzke', 'Benjamin Hamm', 'Tassilo Wald', 'Moritz Langenberg', 'Constantin Ulrich', 'Jonathan Deissler', 'Ralf Floca', 'K. Maier-Hein']
ss_year: 2025; ss_venue: arXiv.org; ss_citationCount: 6; ss_referenceCount: 132; ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2503.08505
title: CFNet: Optimizing Remote Sensing Change Detection through Content-Aware Enhancement
authors: ['Fan Wu', 'Sijun Dong', 'Xiaoliang Meng']
categories: ['cs.CV']
summary: Change detection is a crucial and widely applied task in remote sensing, aimed at identifying and analyzing changes occurring in the same geographical area over time. Due to variability in acquisition conditions, bi-temporal remote sensing images often exhibit significant differences in image style. Even with the power...
published: 2025-03-11T14:56:11Z
comments: 17 pages, 12 figures
journal_ref / doi: null
ss_* (all Semantic Scholar fields): null

arxiv_id: 2503.08507
title: Referring to Any Person
authors: ['Qing Jiang', 'Lin Wu', 'Zhaoyang Zeng', 'Tianhe Ren', 'Yuda Xiong', 'Yihao Chen', 'Qin Liu', 'Lei Zhang']
categories: ['cs.CV']
summary: Humans are undoubtedly the most important participants in computer vision, and the ability to detect any individual given a natural language description, a task we define as referring to any person, holds substantial practical value. However, we find that existing models generally fail to achieve real-world usability, ...
published: 2025-03-11T14:57:14Z
comments / journal_ref / doi: null
ss_title: Referring to Any Person
ss_authors: ['Qing Jiang', 'Lin Wu', 'Zhaoyang Zeng', 'Tianhe Ren', 'Yuda Xiong', 'Yihao Chen', 'Qin Liu', 'Lei Zhang']
ss_year: 2025; ss_venue: arXiv.org; ss_citationCount: 2; ss_referenceCount: 86; ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2503.08540
title: Mellow: a small audio language model for reasoning
authors: ['Soham Deshmukh', 'Satvik Dixit', 'Rita Singh', 'Bhiksha Raj']
categories: ['cs.SD', 'cs.AI', 'eess.AS']
summary: Multimodal Audio-Language Models (ALMs) can understand and reason over both audio and text. Typically, reasoning performance correlates with model size, with the best results achieved by models exceeding 8 billion parameters. However, no prior work has explored enabling small audio-language models to perform reasoning ...
published: 2025-03-11T15:29:00Z
comments: Checkpoint and dataset available at: https://github.com/soham97/mellow
journal_ref / doi: null
ss_title: Mellow: a small audio language model for reasoning
ss_authors: ['Soham Deshmukh', 'Satvik Dixit', 'Rita Singh', 'Bhiksha Raj']
ss_year: 2025; ss_venue: arXiv.org; ss_citationCount: 4; ss_referenceCount: 74; ss_fieldsOfStudy: ['Computer Science', 'Engineering']

arxiv_id: 2503.08561
title: ComicsPAP: understanding comic strips by picking the correct panel
authors: ['Emanuele Vivoli', 'Artemis Llabrés', 'Mohamed Ali Souibgui', 'Marco Bertini', 'Ernest Valveny Llobet', 'Dimosthenis Karatzas']
categories: ['cs.CV']
summary: Large multimodal models (LMMs) have made impressive strides in image captioning, VQA, and video comprehension, yet they still struggle with the intricate temporal and spatial cues found in comics. To address this gap, we introduce ComicsPAP, a large-scale benchmark designed for comic strip understanding. Comprising ove...
published: 2025-03-11T15:50:20Z
comments / journal_ref / doi: null
ss_* (all Semantic Scholar fields): null

arxiv_id: 2503.08569
title: DeepReview: Improving LLM-based Paper Review with Human-like Deep Thinking Process
authors: ['Minjun Zhu', 'Yixuan Weng', 'Linyi Yang', 'Yue Zhang']
categories: ['cs.CL', 'cs.LG']
summary: Large Language Models (LLMs) are increasingly utilized in scientific research assessment, particularly in automated paper review. However, existing LLM-based review systems face significant challenges, including limited domain expertise, hallucinated reasoning, and a lack of structured evaluation. To address these limi...
published: 2025-03-11T15:59:43Z
comments / journal_ref / doi: null
ss_* (all Semantic Scholar fields): null

arxiv_id: 2503.08619
title: LightGen: Efficient Image Generation through Knowledge Distillation and Direct Preference Optimization
authors: ['Xianfeng Wu', 'Yajing Bai', 'Haoze Zheng', 'Harold Haodong Chen', 'Yexin Liu', 'Zihao Wang', 'Xuran Ma', 'Wen-Jie Shu', 'Xianzu Wu', 'Harry Yang', 'Ser-Nam Lim']
categories: ['cs.CV']
summary: Recent advances in text-to-image generation have primarily relied on extensive datasets and parameter-heavy architectures. These requirements severely limit accessibility for researchers and practitioners who lack substantial computational resources. In this paper, we introduce \model, an efficient training paradigm fo...
published: 2025-03-11T16:58:02Z
comments: Code: https://github.com/XianfengWu01/LightGen
journal_ref / doi: null
ss_* (all Semantic Scholar fields): null

arxiv_id: 2503.08638
title: YuE: Scaling Open Foundation Models for Long-Form Music Generation
authors: ['Ruibin Yuan', 'Hanfeng Lin', 'Shuyue Guo', 'Ge Zhang', 'Jiahao Pan', 'Yongyi Zang', 'Haohe Liu', 'Yiming Liang', 'Wenye Ma', 'Xingjian Du', 'Xinrun Du', 'Zhen Ye', 'Tianyu Zheng', 'Yinghao Ma', 'Minghao Liu', 'Zeyue Tian', 'Ziya Zhou', 'Liumeng Xue', 'Xingwei Qu', 'Yizhi Li', 'Shangda Wu', 'Tianhao Shen', 'Ziyang Ma'...
categories: ['eess.AS', 'cs.AI', 'cs.MM', 'cs.SD']
summary: We tackle the task of long-form music generation--particularly the challenging \textbf{lyrics-to-song} problem--by introducing YuE, a family of open foundation models based on the LLaMA2 architecture. Specifically, YuE scales to trillions of tokens and generates up to five minutes of music while maintaining lyrical ali...
published: 2025-03-11T17:26:50Z
comments: https://github.com/multimodal-art-projection/YuE
journal_ref / doi: null
ss_* (all Semantic Scholar fields): null

arxiv_id: 2503.08639
title: GBlobs: Explicit Local Structure via Gaussian Blobs for Improved Cross-Domain LiDAR-based 3D Object Detection
authors: ['Dušan Malić', 'Christian Fruhwirth-Reisinger', 'Samuel Schulter', 'Horst Possegger']
categories: ['cs.CV']
summary: LiDAR-based 3D detectors need large datasets for training, yet they struggle to generalize to novel domains. Domain Generalization (DG) aims to mitigate this by training detectors that are invariant to such domain shifts. Current DG approaches exclusively rely on global geometric features (point cloud Cartesian coordin...
published: 2025-03-11T17:29:56Z
comments: Accepted at CVPR 2025
journal_ref / doi: null
ss_title: GBlobs: Explicit Local Structure via Gaussian Blobs for Improved Cross-Domain LiDAR-based 3D Object Detection
ss_authors: ['Dušan Malić', 'Christian Fruhwirth-Reisinger', 'Samuel Schulter', 'Horst Possegger']
ss_year: 2025; ss_venue: arXiv.org; ss_citationCount: 0; ss_referenceCount: 68; ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2503.08662
title: Exploring the Word Sense Disambiguation Capabilities of Large Language Models
authors: ['Pierpaolo Basile', 'Lucia Siciliani', 'Elio Musacchio', 'Giovanni Semeraro']
categories: ['cs.CL', 'cs.AI']
summary: Word Sense Disambiguation (WSD) is a historical task in computational linguistics that has received much attention over the years. However, with the advent of Large Language Models (LLMs), interest in this task (in its classical definition) has decreased. In this study, we evaluate the performance of various LLMs on th...
published: 2025-03-11T17:50:44Z
comments / journal_ref / doi: null
ss_* (all Semantic Scholar fields): null

arxiv_id: 2503.08686
title: OmniMamba: Efficient and Unified Multimodal Understanding and Generation via State Space Models
authors: ['Jialv Zou', 'Bencheng Liao', 'Qian Zhang', 'Wenyu Liu', 'Xinggang Wang']
categories: ['cs.CV']
summary: Recent advancements in unified multimodal understanding and visual generation (or multimodal generation) models have been hindered by their quadratic computational complexity and dependence on large-scale training data. We present OmniMamba, the first linear-architecture-based multimodal generation model that generates...
published: 2025-03-11T17:59:46Z
comments / journal_ref / doi: null
ss_* (all Semantic Scholar fields): null

arxiv_id: 2503.08805
title: Filter Like You Test: Data-Driven Data Filtering for CLIP Pretraining
authors: ['Mikey Shechter', 'Yair Carmon']
categories: ['cs.CV', 'cs.LG']
summary: We introduce Filter Like You Test (FLYT), an algorithm for curating large-scale vision-language datasets that learns the usefulness of each data point as a pretraining example. FLYT trains a scoring model that learns to weigh each example's features using gradient signals from downstream tasks training sets. Based on F...
published: 2025-03-11T18:34:12Z
comments / journal_ref / doi: null
ss_title: Filter Like You Test: Data-Driven Data Filtering for CLIP Pretraining
ss_authors: ['Mikey Shechter', 'Y. Carmon']
ss_year: 2025; ss_venue: arXiv.org; ss_citationCount: 0; ss_referenceCount: 48; ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2503.08890
title: PlainQAFact: Automatic Factuality Evaluation Metric for Biomedical Plain Language Summaries Generation
authors: ['Zhiwen You', 'Yue Guo']
categories: ['cs.CL']
summary: Hallucinated outputs from language models pose risks in the medical domain, especially for lay audiences making health-related decisions. Existing factuality evaluation methods, such as entailment- and question-answering-based (QA), struggle with plain language summary (PLS) generation due to elaborative explanation ph...
published: 2025-03-11T20:59:53Z
comments / journal_ref / doi: null
ss_title: PlainQAFact: Automatic Factuality Evaluation Metric for Biomedical Plain Language Summaries Generation
ss_authors: ['Zhiwen You', 'Yue Guo']
ss_year: 2025; ss_venue: arXiv.org; ss_citationCount: 0; ss_referenceCount: 49; ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2503.08942
title: Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback
authors: ['Runlong Zhou', 'Maryam Fazel', 'Simon S. Du']
categories: ['cs.LG']
summary: Reinforcement learning from human feedback (RLHF) has become essential for improving language model capabilities, but traditional approaches rely on the assumption that human preferences follow a transitive Bradley-Terry model. This assumption fails to capture the non-transitive nature of populational human preferences...
published: 2025-03-11T22:44:54Z
comments: COLM 2025
journal_ref / doi: null
ss_* (all Semantic Scholar fields): null

arxiv_id: 2503.08965
title: LLM-Driven Usefulness Labeling for IR Evaluation
authors: ['Mouly Dewan', 'Jiqun Liu', 'Chirag Shah']
categories: ['cs.IR']
summary: In the information retrieval (IR) domain, evaluation plays a crucial role in optimizing search experiences and supporting diverse user intents. In the recent LLM era, research has been conducted to automate document relevance labels, as these labels have traditionally been assigned by crowd-sourced workers - a process ...
published: 2025-03-12T00:07:39Z
comments / journal_ref / doi: null
ss_title: LLM-Driven Usefulness Labeling for IR Evaluation
ss_authors: ['Mouly Dewan', 'Jiqun Liu', 'Chirag Shah']
ss_year: 2025; ss_venue: arXiv.org; ss_citationCount: 0; ss_referenceCount: 30; ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2503.09089
title: LocAgent: Graph-Guided LLM Agents for Code Localization
authors: ['Zhaoling Chen', 'Xiangru Tang', 'Gangda Deng', 'Fang Wu', 'Jialong Wu', 'Zhiwei Jiang', 'Viktor Prasanna', 'Arman Cohan', 'Xingyao Wang']
categories: ['cs.SE', 'cs.AI', 'cs.CL']
summary: Code localization--identifying precisely where in a codebase changes need to be made--is a fundamental yet challenging task in software maintenance. Existing approaches struggle to efficiently navigate complex codebases when identifying relevant code sections. The challenge lies in bridging natural language problem des...
published: 2025-03-12T05:55:01Z
comments / journal_ref / doi: null
ss_* (all Semantic Scholar fields): null

arxiv_id: 2503.09146
title: Generative Frame Sampler for Long Video Understanding
authors: ['Linli Yao', 'Haoning Wu', 'Kun Ouyang', 'Yuanxing Zhang', 'Caiming Xiong', 'Bei Chen', 'Xu Sun', 'Junnan Li']
categories: ['cs.CV', 'cs.MM']
summary: Despite recent advances in Video Large Language Models (VideoLLMs), effectively understanding long-form videos remains a significant challenge. Perceiving lengthy videos containing thousands of frames poses substantial computational burden. To mitigate this issue, this paper introduces Generative Frame Sampler (GenS), ...
published: 2025-03-12T08:16:39Z
comments / journal_ref / doi: null
ss_* (all Semantic Scholar fields): null

arxiv_id: 2503.09197
title: Teaching LMMs for Image Quality Scoring and Interpreting
authors: ['Zicheng Zhang', 'Haoning Wu', 'Ziheng Jia', 'Weisi Lin', 'Guangtao Zhai']
categories: ['cs.CV']
summary: Image quality scoring and interpreting are two fundamental components of Image Quality Assessment (IQA). The former quantifies image quality, while the latter enables descriptive question answering about image quality. Traditionally, these two tasks have been addressed independently. However, from the perspective of th...
published: 2025-03-12T09:39:33Z
comments / journal_ref / doi: null
ss_title: Teaching LMMs for Image Quality Scoring and Interpreting
ss_authors: ['Zicheng Zhang', 'Haoning Wu', 'Ziheng Jia', 'Weisi Lin', 'Guangtao Zhai']
ss_year: 2025; ss_venue: arXiv.org; ss_citationCount: 2; ss_referenceCount: 69; ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2503.09279
title: Cockatiel: Ensembling Synthetic and Human Preferenced Training for Detailed Video Caption
authors: ['Luozheng Qin', 'Zhiyu Tan', 'Mengping Yang', 'Xiaomeng Yang', 'Hao Li']
categories: ['cs.CV']
summary: Video Detailed Captioning (VDC) is a crucial task for vision-language bridging, enabling fine-grained descriptions of complex video content. In this paper, we first comprehensively benchmark current state-of-the-art approaches and systematically identified two critical limitations: biased capability towards specific ca...
published: 2025-03-12T11:25:04Z
comments: For more details, please refer to our project page: https://sais-fuxi.github.io/projects/cockatiel/
journal_ref / doi: null
ss_* (all Semantic Scholar fields): null

arxiv_id: 2503.09313
title: xVLM2Vec: Adapting LVLM-based embedding models to multilinguality using Self-Knowledge Distillation
authors: ['Elio Musacchio', 'Lucia Siciliani', 'Pierpaolo Basile', 'Giovanni Semeraro']
categories: ['cs.CL', 'cs.IR']
summary: In the current literature, most embedding models are based on the encoder-only transformer architecture to extract a dense and meaningful representation of the given input, which can be a text, an image, and more. With the recent advances in language modeling thanks to the introduction of Large Language Models, the pos...
published: 2025-03-12T12:04:05Z
comments: fix typo in number of tasks in MMEB; fix url for source code; added missing reference to XTD10
journal_ref / doi: null
ss_title: xVLM2Vec: Adapting LVLM-based embedding models to multilinguality using Self-Knowledge Distillation
ss_authors: ['Elio Musacchio', 'Lucia Siciliani', 'Pierpaolo Basile', 'Giovanni Semeraro']
ss_year: 2025; ss_venue: arXiv.org; ss_citationCount: 0; ss_referenceCount: 32; ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2503.09532
title: SAEBench: A Comprehensive Benchmark for Sparse Autoencoders in Language Model Interpretability
authors: ['Adam Karvonen', 'Can Rager', 'Johnny Lin', 'Curt Tigges', 'Joseph Bloom', 'David Chanin', 'Yeu-Tong Lau', 'Eoin Farrell', 'Callum McDougall', 'Kola Ayonrinde', 'Demian Till', 'Matthew Wearden', 'Arthur Conmy', 'Samuel Marks', 'Neel Nanda']
categories: ['cs.LG', 'cs.CL']
summary: Sparse autoencoders (SAEs) are a popular technique for interpreting language model activations, and there is extensive recent work on improving SAE effectiveness. However, most prior work evaluates progress using unsupervised proxy metrics with unclear practical relevance. We introduce SAEBench, a comprehensive evaluat...
published: 2025-03-12T16:49:02Z
comments: Accepted to ICML 2025 main conference
journal_ref / doi: null
ss_title: SAEBench: A Comprehensive Benchmark for Sparse Autoencoders in Language Model Interpretability
ss_authors: ['Adam Karvonen', 'Can Rager', 'Johnny Lin', 'Curt Tigges', 'Joseph Bloom', 'David Chanin', 'Yeu-Tong Lau', 'Eoin Farrell', 'Callum McDougall', 'Kola Ayonrinde', 'Matthew Wearden', 'Arthur Conmy', 'Samuel Marks', 'Neel Nanda']
ss_year: 2025; ss_venue: arXiv.org; ss_citationCount: 23; ss_referenceCount: 29; ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2503.09573
title: Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models
authors: ['Marianne Arriola', 'Aaron Gokaslan', 'Justin T. Chiu', 'Zhihan Yang', 'Zhixuan Qi', 'Jiaqi Han', 'Subham Sekhar Sahoo', 'Volodymyr Kuleshov']
categories: ['cs.LG', 'cs.AI']
summary: Diffusion language models offer unique benefits over autoregressive models due to their potential for parallelized generation and controllability, yet they lag in likelihood modeling and are limited to fixed-length generation. In this work, we introduce a class of block diffusion language models that interpolate betwee...
published: 2025-03-12T17:43:40Z
comments: ICLR 2025 Oral. We provide the code at https://github.com/kuleshov-group/bd3lms
journal_ref / doi: null
ss_* (all Semantic Scholar fields): null

arxiv_id: 2503.09590
title: BIMBA: Selective-Scan Compression for Long-Range Video Question Answering
authors: ['Md Mohaiminul Islam', 'Tushar Nagarajan', 'Huiyu Wang', 'Gedas Bertasius', 'Lorenzo Torresani']
categories: ['cs.CV']
summary: Video Question Answering (VQA) in long videos poses the key challenge of extracting relevant information and modeling long-range dependencies from many redundant frames. The self-attention mechanism provides a general solution for sequence modeling, but it has a prohibitive cost when applied to a massive number of spat...
published: 2025-03-12T17:57:32Z
comments: Accepted by CVPR 2025
journal_ref / doi: null
ss_* (all Semantic Scholar fields): null

arxiv_id: 2503.09600
title: MoC: Mixtures of Text Chunking Learners for Retrieval-Augmented Generation System
authors: ['Jihao Zhao', 'Zhiyuan Ji', 'Zhaoxin Fan', 'Hanyu Wang', 'Simin Niu', 'Bo Tang', 'Feiyu Xiong', 'Zhiyu Li']
categories: ['cs.CL']
summary: Retrieval-Augmented Generation (RAG), while serving as a viable complement to large language models (LLMs), often overlooks the crucial aspect of text chunking within its pipeline. This paper initially introduces a dual-metric evaluation method, comprising Boundary Clarity and Chunk Stickiness, to enable the direct qua...
published: 2025-03-12T17:59:42Z
comments / journal_ref / doi: null
ss_* (all Semantic Scholar fields): null

arxiv_id: 2503.09641
title: SANA-Sprint: One-Step Diffusion with Continuous-Time Consistency Distillation
authors: ['Junsong Chen', 'Shuchen Xue', 'Yuyang Zhao', 'Jincheng Yu', 'Sayak Paul', 'Junyu Chen', 'Han Cai', 'Song Han', 'Enze Xie']
categories: ['cs.GR']
summary: This paper presents SANA-Sprint, an efficient diffusion model for ultra-fast text-to-image (T2I) generation. SANA-Sprint is built on a pre-trained foundation model and augmented with hybrid distillation, dramatically reducing inference steps from 20 to 1-4. We introduce three key innovations: (1) We propose a training-...
published: 2025-03-12T04:53:07Z
comments: 22 pages, 11 figures, 8 tables, In submission
journal_ref / doi: null
ss_* (all Semantic Scholar fields): null

arxiv_id: 2503.09642
title: Open-Sora 2.0: Training a Commercial-Level Video Generation Model in $200k
authors: ['Xiangyu Peng', 'Zangwei Zheng', 'Chenhui Shen', 'Tom Young', 'Xinying Guo', 'Binluo Wang', 'Hang Xu', 'Hongxin Liu', 'Mingyan Jiang', 'Wenjun Li', 'Yuhui Wang', 'Anbang Ye', 'Gang Ren', 'Qianran Ma', 'Wanying Liang', 'Xiang Lian', 'Xiwen Wu', 'Yuting Zhong', 'Zhuangyan Li', 'Chaoyu Gong', 'Guojun Lei', 'Leijun Cheng'...
categories: ['cs.GR', 'cs.AI']
summary: Video generation models have achieved remarkable progress in the past year. The quality of AI video continues to improve, but at the cost of larger model size, increased data quantity, and greater demand for training compute. In this report, we present Open-Sora 2.0, a commercial-level video generation model trained fo...
published: 2025-03-12T05:00:07Z
comments / journal_ref / doi: null
ss_title: Open-Sora 2.0: Training a Commercial-Level Video Generation Model in $200k
ss_authors: ['Xiangyu Peng', 'Zangwei Zheng', 'Chenhui Shen', 'Tom Young', 'Xinying Guo', 'Binluo Wang', 'Hang Xu', 'Hongxin Liu', 'Mingyan Jiang', 'Wenjun Li', 'Yuhui Wang', 'Anbang Ye', 'Gang Ren', 'Qianran Ma', 'Wanying Liang', 'Xiang Lian', 'Xiwen Wu', 'Yu Zhong', 'Zhuangyan Li', 'Chaoyu Gong', 'Guojun Lei', 'Leijun Cheng', 'L...
ss_year: 2025; ss_venue: arXiv.org; ss_citationCount: 13; ss_referenceCount: 44; ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2503.09662
title: CoRe^2: Collect, Reflect and Refine to Generate Better and Faster
authors: ['Shitong Shao', 'Zikai Zhou', 'Dian Xie', 'Yuetong Fang', 'Tian Ye', 'Lichen Bai', 'Zeke Xie']
categories: ['cs.CV']
summary: Making text-to-image (T2I) generative model sample both fast and well represents a promising research direction. Previous studies have typically focused on either enhancing the visual quality of synthesized images at the expense of sampling efficiency or dramatically accelerating sampling without improving the base mod...
published: 2025-03-12T15:15:25Z
comments / journal_ref / doi: null
ss_title: CoRe2: Collect, Reflect and Refine to Generate Better and Faster
ss_authors: ['Shitong Shao', 'Zikai Zhou', 'Dian Xie', 'Yuetong Fang', 'Tian Ye', 'Lichen Bai', 'Zeke Xie']
ss_year: 2025; ss_venue: arXiv.org; ss_citationCount: 0; ss_referenceCount: 43; ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2503.10076
title: VMBench: A Benchmark for Perception-Aligned Video Motion Generation
authors: ['Xinran Ling', 'Chen Zhu', 'Meiqi Wu', 'Hangyu Li', 'Xiaokun Feng', 'Cundian Yang', 'Aiming Hao', 'Jiashu Zhu', 'Jiahong Wu', 'Xiangxiang Chu']
categories: ['cs.CV']
summary: Video generation has advanced rapidly, improving evaluation methods, yet assessing video's motion remains a major challenge. Specifically, there are two key issues: 1) current motion metrics do not fully align with human perceptions; 2) the existing motion prompts are limited. Based on these findings, we introduce VMBe...
published: 2025-03-13T05:54:42Z
comments / journal_ref / doi: null
ss_* (all Semantic Scholar fields): null

arxiv_id: 2503.10267
title: An Expanded Massive Multilingual Dataset for High-Performance Language Technologies (HPLT)
authors: ['Laurie Burchell', 'Ona de Gibert', 'Nikolay Arefyev', 'Mikko Aulamo', 'Marta Bañón', 'Pinzhen Chen', 'Mariia Fedorova', 'Liane Guillou', 'Barry Haddow', 'Jan Hajič', 'Jindřich Helcl', 'Erik Henriksson', 'Mateusz Klimaszewski', 'Ville Komulainen', 'Andrey Kutuzov', 'Joona Kytöniemi', 'Veronika Laippala', 'Petter Mæhlu...
categories: ['cs.CL']
summary: Training state-of-the-art large language models requires vast amounts of clean and diverse textual data. However, building suitable multilingual datasets remains a challenge. In this work, we present HPLT v2, a collection of high-quality multilingual monolingual and parallel corpora, extending prior work of the HPLT pr...
published: 2025-03-13T11:24:09Z
comments: ACL'2025 Main Proceedings
journal_ref / doi: null
ss_title: An Expanded Massive Multilingual Dataset for High-Performance Language Technologies
ss_authors: ['Laurie Burchell', 'Ona de Gibert', 'Nikolay Arefyev', 'Mikko Aulamo', 'Marta Bañón', 'Pinzhen Chen', 'Mariia Fedorova', 'Liane Guillou', 'Barry Haddow', 'Jan Hajič', 'Jindřich Helcl', 'Erik Henriksson', 'Mateusz Klimaszewski', 'Ville Komulainen', 'Andrey Kutuzov', 'Joona Kytöniemi', 'Veronika Laippala', 'Petter...
ss_year: 2025; ss_venue: arXiv.org; ss_citationCount: 4; ss_referenceCount: 64; ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2503.10286
title: VicaSplat: A Single Run is All You Need for 3D Gaussian Splatting and Camera Estimation from Unposed Video Frames
authors: ['Zhiqi Li', 'Chengrui Dong', 'Yiming Chen', 'Zhangchi Huang', 'Peidong Liu']
categories: ['cs.CV']
summary: We present VicaSplat, a novel framework for joint 3D Gaussians reconstruction and camera pose estimation from a sequence of unposed video frames, which is a critical yet underexplored task in real-world 3D applications. The core of our method lies in a novel transformer-based network architecture. In particular, our mo...
published: 2025-03-13T11:56:05Z
comments / journal_ref / doi: null
ss_title: VicaSplat: A Single Run is All You Need for 3D Gaussian Splatting and Camera Estimation from Unposed Video Frames
ss_authors: ['Zhiqi Li', 'Chengrui Dong', 'Yiming Chen', 'Zhangchi Huang', 'Peidong Liu']
ss_year: 2025; ss_venue: arXiv.org; ss_citationCount: 2; ss_referenceCount: 49; ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2503.10291
title: VisualPRM: An Effective Process Reward Model for Multimodal Reasoning
authors: ['Weiyun Wang', 'Zhangwei Gao', 'Lianjie Chen', 'Zhe Chen', 'Jinguo Zhu', 'Xiangyu Zhao', 'Yangzhou Liu', 'Yue Cao', 'Shenglong Ye', 'Xizhou Zhu', 'Lewei Lu', 'Haodong Duan', 'Yu Qiao', 'Jifeng Dai', 'Wenhai Wang']
categories: ['cs.CV', 'cs.CL']
summary: We introduce VisualPRM, an advanced multimodal Process Reward Model (PRM) with 8B parameters, which improves the reasoning abilities of existing Multimodal Large Language Models (MLLMs) across different model scales and families with Best-of-N (BoN) evaluation strategies. Specifically, our model improves the reasoning ...
published: 2025-03-13T12:03:37Z
comments / journal_ref / doi: null
ss_title: VisualPRM: An Effective Process Reward Model for Multimodal Reasoning
ss_authors: ['Weiyun Wang', 'Zhangwei Gao', 'Lianjie Chen', 'Zhe Chen', 'Jinguo Zhu', 'Xiangyu Zhao', 'Yangzhou Liu', 'Yue Cao', 'Shenglong Ye', 'Xizhou Zhu', 'Lewei Lu', 'Haodong Duan', 'Yu Qiao', 'Jifeng Dai', 'Wenhai Wang']
ss_year: 2025; ss_venue: arXiv.org; ss_citationCount: 39; ss_referenceCount: 98; ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2503.10354
title: A Hybrid Architecture with Efficient Fine Tuning for Abstractive Patent Document Summarization
authors: ['Nevidu Jayatilleke', 'Ruvan Weerasinghe']
categories: ['cs.CL']
summary: Automatic patent summarization approaches that help in the patent analysis and comprehension procedure are in high demand due to the colossal growth of innovations. The development of natural language processing (NLP), text mining, and deep learning has notably amplified the efficacy of text summarization models for ab...
published: 2025-03-13T13:30:54Z
comments: 8th International Research Conference on Smart Computing and Systems Engineering, University of Kelaniya, Sri Lanka
journal_ref: null
doi: 10.1109/SCSE65633.2025.11030964
ss_* (all Semantic Scholar fields): null

arxiv_id: 2503.10365
title: Piece it Together: Part-Based Concepting with IP-Priors
authors: ['Elad Richardson', 'Kfir Goldberg', 'Yuval Alaluf', 'Daniel Cohen-Or']
categories: ['cs.CV']
summary: Advanced generative models excel at synthesizing images but often rely on text-based conditioning. Visual designers, however, often work beyond language, directly drawing inspiration from existing visual elements. In many cases, these elements represent only fragments of a potential concept-such as an uniquely structur...
published: 2025-03-13T13:46:10Z
comments: Project page available at https://eladrich.github.io/PiT/
journal_ref / doi: null
ss_* (all Semantic Scholar fields): null

arxiv_id: 2503.10392
title: RoMA: Scaling up Mamba-based Foundation Models for Remote Sensing
authors: ['Fengxiang Wang', 'Hongzhen Wang', 'Yulin Wang', 'Di Wang', 'Mingshuo Chen', 'Haiyan Zhao', 'Yangang Sun', 'Shuo Wang', 'Long Lan', 'Wenjing Yang', 'Jing Zhang']
categories: ['cs.CV', 'cs.AI']
summary: Recent advances in self-supervised learning for Vision Transformers (ViTs) have fueled breakthroughs in remote sensing (RS) foundation models. However, the quadratic complexity of self-attention poses a significant barrier to scalability, particularly for large models and high-resolution images. While the linear-comple...
published: 2025-03-13T14:09:18Z
comments / journal_ref / doi: null
ss_title: RoMA: Scaling up Mamba-based Foundation Models for Remote Sensing
ss_authors: ['Fengxiang Wang', 'Hongzhen Wang', 'Yulin Wang', 'Di Wang', 'Mingshuo Chen', 'Haiyan Zhao', 'Yangang Sun', 'Shuo Wang', 'Long Lan', 'Wenjing Yang', 'Jing Zhang']
ss_year: 2025; ss_venue: arXiv.org; ss_citationCount: 3; ss_referenceCount: 70; ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2503.10437
title: 4D LangSplat: 4D Language Gaussian Splatting via Multimodal Large Language Models
authors: ['Wanhua Li', 'Renping Zhou', 'Jiawei Zhou', 'Yingwei Song', 'Johannes Herter', 'Minghan Qin', 'Gao Huang', 'Hanspeter Pfister']
categories: ['cs.CV']
summary: Learning 4D language fields to enable time-sensitive, open-ended language queries in dynamic scenes is essential for many real-world applications. While LangSplat successfully grounds CLIP features into 3D Gaussian representations, achieving precision and efficiency in 3D static scenes, it lacks the ability to handle d...
published: 2025-03-13T14:58:22Z
comments: CVPR 2025. Project Page: https://4d-langsplat.github.io
journal_ref / doi: null
ss_title: 4D LangSplat: 4D Language Gaussian Splatting via Multimodal Large Language Models
ss_authors: ['Wanhua Li', 'Renping Zhou', 'Jiawei Zhou', 'Yingwei Song', 'Johannes Herter', 'Minghan Qin', 'Gao Huang', 'Hanspeter Pfister']
ss_year: 2025; ss_venue: arXiv.org; ss_citationCount: 3; ss_referenceCount: 69; ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2503.10460
title: Light-R1: Curriculum SFT, DPO and RL for Long COT from Scratch and Beyond
authors: ['Liang Wen', 'Yunke Cai', 'Fenrui Xiao', 'Xin He', 'Qi An', 'Zhenyu Duan', 'Yimin Du', 'Junchen Liu', 'Lifu Tang', 'Xiaowei Lv', 'Haosheng Zou', 'Yongchao Deng', 'Shousheng Jia', 'Xiangzheng Zhang']
categories: ['cs.CL', 'cs.LG']
summary: This paper introduces Light-R1, an open-source suite for training long reasoning models using reproducible and cost-effective methodology. Given the proprietary nature of data used in the DeepSeek-R1 series, we develop an alternative approach leveraging exclusively public data and models. Our curriculum training progre...
published: 2025-03-13T15:29:22Z
comments: v4: ACL'25 industry track camera ready; v3: minor modifications; v2: better writing & format for later submission; all release at https://github.com/Qihoo360/Light-R1
journal_ref / doi: null
ss_* (all Semantic Scholar fields): null

arxiv_id: 2503.10522
title: AudioX: Diffusion Transformer for Anything-to-Audio Generation
authors: ['Zeyue Tian', 'Yizhu Jin', 'Zhaoyang Liu', 'Ruibin Yuan', 'Xu Tan', 'Qifeng Chen', 'Wei Xue', 'Yike Guo']
categories: ['cs.MM', 'cs.CV', 'cs.LG', 'cs.SD', 'eess.AS']
summary: Audio and music generation have emerged as crucial tasks in many applications, yet existing approaches face significant limitations: they operate in isolation without unified capabilities across modalities, suffer from scarce high-quality, multi-modal training data, and struggle to effectively integrate diverse inputs....
published: 2025-03-13T16:30:59Z
comments: The code and datasets will be available at https://zeyuet.github.io/AudioX/
journal_ref / doi: null
ss_title: AudioX: Diffusion Transformer for Anything-to-Audio Generation
ss_authors: ['Zeyue Tian', 'Yizhu Jin', 'Zhaoyang Liu', 'Ruibin Yuan', 'Xu Tan', 'Qifeng Chen', 'Wei Xue', 'Yi-Ting Guo']
ss_year: 2025; ss_venue: arXiv.org; ss_citationCount: 6; ss_referenceCount: 72; ss_fieldsOfStudy: ['Computer Science', 'Engineering']

arxiv_id: 2503.10568
title: Autoregressive Image Generation with Randomized Parallel Decoding
authors: ['Haopeng Li', 'Jinyue Yang', 'Guoqi Li', 'Huan Wang']
categories: ['cs.CV']
summary: We introduce ARPG, a novel visual autoregressive model that enables randomized parallel generation, addressing the inherent limitations of conventional raster-order approaches, which hinder inference efficiency and zero-shot generalization due to their sequential, predefined token generation order. Our key insight is t...
published: 2025-03-13T17:19:51Z
comments / journal_ref / doi: null
ss_* (all Semantic Scholar fields): null

arxiv_id: 2503.10582
title: VisualWebInstruct: Scaling up Multimodal Instruction Data through Web Search
authors: ['Yiming Jia', 'Jiachen Li', 'Xiang Yue', 'Bo Li', 'Ping Nie', 'Kai Zou', 'Wenhu Chen']
categories: ['cs.CV', 'cs.AI', 'cs.CL']
summary: Vision-Language Models have made significant progress on many perception-focused tasks. However, their progress on reasoning-focused tasks remains limited due to the lack of high-quality and diverse training data. In this work, we aim to address the scarcity of reasoning-focused multimodal datasets. We propose VisualWe...
published: 2025-03-13T17:32:48Z
comments: Technical Report
journal_ref / doi: null
ss_title: VisualWebInstruct: Scaling up Multimodal Instruction Data through Web Search
ss_authors: ['Yiming Jia', 'Jiachen Li', 'Xiang Yue', 'Bo Li', 'Ping Nie', 'Kai Zou', 'Wenhu Chen']
ss_year: 2025; ss_venue: arXiv.org; ss_citationCount: 4; ss_referenceCount: 65; ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2503.10620
title: From TOWER to SPIRE: Adding the Speech Modality to a Text-Only LLM
authors: ['Kshitij Ambilduke', 'Ben Peters', 'Sonal Sannigrahi', 'Anil Keshwani', 'Tsz Kin Lam', 'Bruno Martins', 'Marcely Zanon Boito', 'André F. T. Martins']
categories: ['cs.CL']
summary: Large language models (LLMs) have shown remarkable performance and generalization capabilities across multiple languages and tasks, making them very attractive targets for multi-modality integration (e.g., images or speech). In this work, we extend an existing LLM to the speech modality via speech discretization and co...
published: 2025-03-13T17:57:32Z
comments / journal_ref / doi: null
ss_* (all Semantic Scholar fields): null

2503.10621
DriveLMM-o1: A Step-by-Step Reasoning Dataset and Large Multimodal Model for Driving Scenario Understanding
['Ayesha Ishaq', 'Jean Lahoud', 'Ketan More', 'Omkar Thawakar', 'Ritesh Thawkar', 'Dinura Dissanayake', 'Noor Ahsan', 'Yuhao Li', 'Fahad Shahbaz Khan', 'Hisham Cholakkal', 'Ivan Laptev', 'Rao Muhammad Anwer', 'Salman Khan']
['cs.CV', 'cs.RO']
While large multimodal models (LMMs) have demonstrated strong performance across various Visual Question Answering (VQA) tasks, certain challenges require complex multi-step reasoning to reach accurate answers. One particularly challenging task is autonomous driving, which demands thorough cognitive processing before d...
2025-03-13T17:59:01Z
8 pages, 4 figures, 3 tables, github: https://github.com/ayesha-ishaq/DriveLMM-o1
null
null
DriveLMM-o1: A Step-by-Step Reasoning Dataset and Large Multimodal Model for Driving Scenario Understanding
['Ayesha Ishaq', 'Jean Lahoud', 'Ketan More', 'Omkar Thawakar', 'Ritesh Thawkar', 'Dinura Dissanayake', 'Noor Ahsan', 'Yuhao Li', 'F. Khan', 'Hisham Cholakkal', 'Ivan Laptev', 'R. Anwer', 'Salman Khan']
2025
arXiv.org
4
32
['Computer Science']
2503.10625
LHM: Large Animatable Human Reconstruction Model from a Single Image in Seconds
['Lingteng Qiu', 'Xiaodong Gu', 'Peihao Li', 'Qi Zuo', 'Weichao Shen', 'Junfei Zhang', 'Kejie Qiu', 'Weihao Yuan', 'Guanying Chen', 'Zilong Dong', 'Liefeng Bo']
['cs.CV', 'cs.AI']
Animatable 3D human reconstruction from a single image is a challenging problem due to the ambiguity in decoupling geometry, appearance, and deformation. Recent advances in 3D human reconstruction mainly focus on static human modeling, and the reliance of using synthetic 3D scans for training limits their generalizatio...
2025-03-13T17:59:21Z
Project Page: https://lingtengqiu.github.io/LHM/
null
null
null
null
null
null
null
null
null
2503.10636
The Curse of Conditions: Analyzing and Improving Optimal Transport for Conditional Flow-Based Generation
['Ho Kei Cheng', 'Alexander Schwing']
['cs.LG', 'cs.CV']
Minibatch optimal transport coupling straightens paths in unconditional flow matching. This leads to computationally less demanding inference as fewer integration steps and less complex numerical solvers can be employed when numerically solving an ordinary differential equation at test time. However, in the conditional...
2025-03-13T17:59:56Z
Project page: https://hkchengrex.github.io/C2OT
null
null
null
null
null
null
null
null
null
2503.10684
Open-World Skill Discovery from Unsegmented Demonstrations
['Jingwen Deng', 'Zihao Wang', 'Shaofei Cai', 'Anji Liu', 'Yitao Liang']
['cs.CV', 'cs.AI']
Learning skills in open-world environments is essential for developing agents capable of handling a variety of tasks by combining basic skills. Online demonstration videos are typically long but unsegmented, making them difficult to segment and label with skill identifiers. Unlike existing methods that rely on sequence...
2025-03-11T18:51:40Z
null
null
null
Open-World Skill Discovery from Unsegmented Demonstrations
['Jingwen Deng', 'Zihao Wang', 'Shaofei Cai', 'Anji Liu', 'Yitao Liang']
2025
arXiv.org
1
52
['Computer Science']
2503.10745
Unifying 2D and 3D Vision-Language Understanding
['Ayush Jain', 'Alexander Swerdlow', 'Yuzhou Wang', 'Sergio Arnaud', 'Ada Martin', 'Alexander Sax', 'Franziska Meier', 'Katerina Fragkiadaki']
['cs.CV', 'cs.AI', 'cs.RO']
Progress in 3D vision-language learning has been hindered by the scarcity of large-scale 3D datasets. We introduce UniVLG, a unified architecture for 2D and 3D vision-language understanding that bridges the gap between existing 2D-centric models and the rich 3D sensory data available in embodied systems. Our approach i...
2025-03-13T17:56:22Z
The first two authors contributed equally
null
null
null
null
null
null
null
null
null
2503.10905
Learning to Inference Adaptively for Multimodal Large Language Models
['Zhuoyan Xu', 'Khoi Duc Nguyen', 'Preeti Mukherjee', 'Saurabh Bagchi', 'Somali Chaterji', 'Yingyu Liang', 'Yin Li']
['cs.AI', 'cs.CV', 'cs.LG']
Multimodal Large Language Models (MLLMs) have shown impressive capabilities in reasoning, yet come with substantial computational cost, limiting their deployment in resource-constrained settings. Despite recent efforts on improving the efficiency of MLLMs, prior solutions fall short in responding to varying runtime con...
2025-03-13T21:39:38Z
null
null
null
null
null
null
null
null
null
null
2503.10944
Phishsense-1B: A Technical Perspective on an AI-Powered Phishing Detection Model
['SE Blake']
['cs.CR', 'cs.LG']
Phishing is a persistent cybersecurity threat in today's digital landscape. This paper introduces Phishsense-1B, a refined version of the Llama-Guard-3-1B model, specifically tailored for phishing detection and reasoning. This adaptation utilizes Low-Rank Adaptation (LoRA) and the GuardReasoner finetuning methodology. ...
2025-03-13T23:03:09Z
Phishing Detection Model https://huggingface.co/AcuteShrewdSecurity/Llama-Phishsense-1B
null
null
null
null
null
null
null
null
null
2503.10970
TxAgent: An AI Agent for Therapeutic Reasoning Across a Universe of Tools
['Shanghua Gao', 'Richard Zhu', 'Zhenglun Kong', 'Ayush Noori', 'Xiaorui Su', 'Curtis Ginder', 'Theodoros Tsiligkaridis', 'Marinka Zitnik']
['cs.AI', 'cs.LG']
Precision therapeutics require multimodal adaptive models that generate personalized treatment recommendations. We introduce TxAgent, an AI agent that leverages multi-step reasoning and real-time biomedical knowledge retrieval across a toolbox of 211 tools to analyze drug interactions, contraindications, and patient-sp...
2025-03-14T00:28:15Z
Project page: https://zitniklab.hms.harvard.edu/TxAgent TxAgent code: https://github.com/mims-harvard/TxAgent ToolUniverse code: https://github.com/mims-harvard/ToolUniverse
null
null
TxAgent: An AI Agent for Therapeutic Reasoning Across a Universe of Tools
['Shanghua Gao', 'Richard Zhu', 'Zhenglun Kong', 'Ayush Noori', 'Xiaorui Su', 'Curtis R Ginder', 'Theodoros Tsiligkaridis', 'Marinka Zitnik']
2025
arXiv.org
8
0
['Computer Science']
2503.10995
TigerLLM - A Family of Bangla Large Language Models
['Nishat Raihan', 'Marcos Zampieri']
['cs.CL']
The development of Large Language Models (LLMs) remains heavily skewed towards English and a few other high-resource languages. This linguistic disparity is particularly evident for Bangla - the 5th most spoken language. A few initiatives attempted to create open-source Bangla LLMs with performance still behind high-re...
2025-03-14T01:41:16Z
null
null
null
TigerLLM - A Family of Bangla Large Language Models
['Nishat Raihan', 'Marcos Zampieri']
2025
arXiv.org
0
25
['Computer Science']
2503.11073
Perceive, Understand and Restore: Real-World Image Super-Resolution with Autoregressive Multimodal Generative Models
['Hongyang Wei', 'Shuaizheng Liu', 'Chun Yuan', 'Lei Zhang']
['cs.CV']
By leveraging the generative priors from pre-trained text-to-image diffusion models, significant progress has been made in real-world image super-resolution (Real-ISR). However, these methods tend to generate inaccurate and unnatural reconstructions in complex and/or heavily degraded scenes, primarily due to their limi...
2025-03-14T04:33:59Z
null
null
null
null
null
null
null
null
null
null
2503.11129
Direction-Aware Diagonal Autoregressive Image Generation
['Yijia Xu', 'Jianzhong Ju', 'Jian Luan', 'Jinshi Cui']
['cs.CV', 'cs.AI']
The raster-ordered image token sequence exhibits a significant Euclidean distance between index-adjacent tokens at line breaks, making it unsuitable for autoregressive generation. To address this issue, this paper proposes Direction-Aware Diagonal Autoregressive Image Generation (DAR) method, which generates image toke...
2025-03-14T06:44:01Z
null
null
null
null
null
null
null
null
null
null
2503.11170
DeskVision: Large Scale Desktop Region Captioning for Advanced GUI Agents
['Yibin Xu', 'Liang Yang', 'Hao Chen', 'Hua Wang', 'Zhi Chen', 'Yaohua Tang']
['cs.CL']
The limitation of graphical user interface (GUI) data has been a significant barrier to the development of GUI agents today, especially for the desktop / computer use scenarios. To address this, we propose an automated GUI data generation pipeline, AutoCaptioner, which generates data with rich descriptions while minimi...
2025-03-14T08:16:02Z
null
null
null
null
null
null
null
null
null
null
2503.11197
Reinforcement Learning Outperforms Supervised Fine-Tuning: A Case Study on Audio Question Answering
['Gang Li', 'Jizhong Liu', 'Heinrich Dinkel', 'Yadong Niu', 'Junbo Zhang', 'Jian Luan']
['cs.SD', 'cs.AI', 'cs.CL', 'eess.AS']
Recently, reinforcement learning (RL) has been shown to greatly enhance the reasoning capabilities of large language models (LLMs), and RL-based approaches have been progressively applied to visual multimodal tasks. However, the audio modality has largely been overlooked in these developments. Thus, we conduct a series...
2025-03-14T08:43:53Z
null
null
null
null
null
null
null
null
null
null
2503.11221
Toward Generalized Image Quality Assessment: Relaxing the Perfect Reference Quality Assumption
['Du Chen', 'Tianhe Wu', 'Kede Ma', 'Lei Zhang']
['cs.CV']
Full-reference image quality assessment (FR-IQA) generally assumes that reference images are of perfect quality. However, this assumption is flawed due to the sensor and optical limitations of modern imaging systems. Moreover, recent generative enhancement methods are capable of producing images of higher quality than ...
2025-03-14T09:12:03Z
Accepted by CVPR 2025
null
null
null
null
null
null
null
null
null
2503.11251
Step-Video-TI2V Technical Report: A State-of-the-Art Text-Driven Image-to-Video Generation Model
['Haoyang Huang', 'Guoqing Ma', 'Nan Duan', 'Xing Chen', 'Changyi Wan', 'Ranchen Ming', 'Tianyu Wang', 'Bo Wang', 'Zhiying Lu', 'Aojie Li', 'Xianfang Zeng', 'Xinhao Zhang', 'Gang Yu', 'Yuhe Yin', 'Qiling Wu', 'Wen Sun', 'Kang An', 'Xin Han', 'Deshan Sun', 'Wei Ji', 'Bizhu Huang', 'Brian Li', 'Chenfei Wu', 'Guanzhe Huan...
['cs.CV', 'cs.CL']
We present Step-Video-TI2V, a state-of-the-art text-driven image-to-video generation model with 30B parameters, capable of generating videos up to 102 frames based on both text and image inputs. We build Step-Video-TI2V-Eval as a new benchmark for the text-driven image-to-video task and compare Step-Video-TI2V with ope...
2025-03-14T10:01:55Z
7 pages
null
null
null
null
null
null
null
null
null
2503.11299
BriLLM: Brain-inspired Large Language Model
['Hai Zhao', 'Hongqiu Wu', 'Dongjie Yang', 'Anni Zou', 'Jiale Hong']
['cs.CL', 'cs.AI']
This paper reports the first brain-inspired large language model (BriLLM). This is a non-Transformer, non-GPT, non-traditional machine learning input-output controlled generative language model. The model is based on the Signal Fully-connected flowing (SiFu) definition on the directed graph in terms of the neural netwo...
2025-03-14T11:08:30Z
null
null
null
null
null
null
null
null
null
null
2503.11341
Self-Supervised Pretraining for Fine-Grained Plankton Recognition
['Joona Kareinen', 'Tuomas Eerola', 'Kaisa Kraft', 'Lasse Lensu', 'Sanna Suikkanen', 'Heikki Kälviäinen']
['cs.CV']
Plankton recognition is an important computer vision problem due to plankton's essential role in ocean food webs and carbon capture, highlighting the need for species-level monitoring. However, this task is challenging due to its fine-grained nature and dataset shifts caused by different imaging instruments and varying...
2025-03-14T12:15:20Z
CVPR 2025, FGVC12 workshop paper
null
null
null
null
null
null
null
null
null
2503.11509
TikZero: Zero-Shot Text-Guided Graphics Program Synthesis
['Jonas Belouadi', 'Eddy Ilg', 'Margret Keuper', 'Hideki Tanaka', 'Masao Utiyama', 'Raj Dabre', 'Steffen Eger', 'Simone Paolo Ponzetto']
['cs.CL', 'cs.CV']
With the rise of generative AI, synthesizing figures from text captions becomes a compelling application. However, achieving high geometric precision and editability requires representing figures as graphics programs in languages like TikZ, and aligned training data (i.e., graphics programs with captions) remains scarc...
2025-03-14T15:29:58Z
Project page: https://github.com/potamides/DeTikZify
null
null
TikZero: Zero-Shot Text-Guided Graphics Program Synthesis
['Jonas Belouadi', 'Eddy Ilg', 'Margret Keuper', 'Hideki Tanaka', 'Masao Utiyama', 'Raj Dabre', 'Steffen Eger', 'Simone Paolo Ponzetto']
2025
arXiv.org
0
85
['Computer Science']
2503.11576
SmolDocling: An ultra-compact vision-language model for end-to-end multi-modal document conversion
['Ahmed Nassar', 'Andres Marafioti', 'Matteo Omenetti', 'Maksym Lysak', 'Nikolaos Livathinos', 'Christoph Auer', 'Lucas Morin', 'Rafael Teixeira de Lima', 'Yusik Kim', 'A. Said Gurbuz', 'Michele Dolfi', 'Miquel Farré', 'Peter W. J. Staar']
['cs.CV']
We introduce SmolDocling, an ultra-compact vision-language model targeting end-to-end document conversion. Our model comprehensively processes entire pages by generating DocTags, a new universal markup format that captures all page elements in their full context with location. Unlike existing approaches that rely on la...
2025-03-14T16:44:14Z
24 pages, 10 figures
null
null
null
null
null
null
null
null
null
2503.11579
Vamba: Understanding Hour-Long Videos with Hybrid Mamba-Transformers
['Weiming Ren', 'Wentao Ma', 'Huan Yang', 'Cong Wei', 'Ge Zhang', 'Wenhu Chen']
['cs.CV']
State-of-the-art transformer-based large multimodal models (LMMs) struggle to handle hour-long video inputs due to the quadratic complexity of the causal self-attention operations, leading to high computational costs during training and inference. Existing token compression-based methods reduce the number of video toke...
2025-03-14T16:45:23Z
ICCV 2025 Camera Ready Version. Project Page: https://tiger-ai-lab.github.io/Vamba/
null
null
Vamba: Understanding Hour-Long Videos with Hybrid Mamba-Transformers
['Weiming Ren', 'Wentao Ma', 'Huan Yang', 'Cong Wei', 'Ge Zhang', 'Wenhu Chen']
2025
arXiv.org
5
75
['Computer Science']
2503.11591
Pathology Image Compression with Pre-trained Autoencoders
['Srikar Yellapragada', 'Alexandros Graikos', 'Kostas Triaridis', 'Zilinghan Li', 'Tarak Nath Nandi', 'Ravi K Madduri', 'Prateek Prasanna', 'Joel Saltz', 'Dimitris Samaras']
['eess.IV', 'cs.CV']
The growing volume of high-resolution Whole Slide Images in digital histopathology poses significant storage, transmission, and computational efficiency challenges. Standard compression methods, such as JPEG, reduce file sizes but often fail to preserve fine-grained phenotypic details critical for downstream tasks. In ...
2025-03-14T17:01:17Z
null
null
null
Pathology Image Compression with Pre-trained Autoencoders
['Srikar Yellapragada', 'Alexandros Graikos', 'Kostas Triaridis', 'Zilinghan Li', 'T. Nandi', 'Ravi K. Madduri', 'Prateek Prasanna', 'J. Saltz', 'Dimitris Samaras']
2025
arXiv.org
0
28
['Computer Science', 'Engineering']
2503.11651
VGGT: Visual Geometry Grounded Transformer
['Jianyuan Wang', 'Minghao Chen', 'Nikita Karaev', 'Andrea Vedaldi', 'Christian Rupprecht', 'David Novotny']
['cs.CV']
We present VGGT, a feed-forward neural network that directly infers all key 3D attributes of a scene, including camera parameters, point maps, depth maps, and 3D point tracks, from one, a few, or hundreds of its views. This approach is a step forward in 3D computer vision, where models have typically been constrained t...
2025-03-14T17:59:47Z
CVPR 2025, Project Page: https://vgg-t.github.io/
null
null
VGGT: Visual Geometry Grounded Transformer
['Jianyuan Wang', 'Minghao Chen', 'Nikita Karaev', 'Andrea Vedaldi', 'Christian Rupprecht', 'David Novotný']
2025
Computer Vision and Pattern Recognition
38
151
['Computer Science']
2503.11849
Towards a Unified Copernicus Foundation Model for Earth Vision
['Yi Wang', 'Zhitong Xiong', 'Chenying Liu', 'Adam J. Stewart', 'Thomas Dujardin', 'Nikolaos Ioannis Bountos', 'Angelos Zavras', 'Franziska Gerken', 'Ioannis Papoutsis', 'Laura Leal-Taixé', 'Xiao Xiang Zhu']
['cs.CV']
Advances in Earth observation (EO) foundation models have unlocked the potential of big satellite data to learn generic representations from space, benefiting a wide range of downstream applications crucial to our planet. However, most existing efforts remain limited to fixed spectral sensors, focus solely on the Earth...
2025-03-14T20:16:48Z
31 pages, 32 figures
null
null
null
null
null
null
null
null
null
2503.12127
Hyperbolic Safety-Aware Vision-Language Models
['Tobia Poppi', 'Tejaswi Kasarla', 'Pascal Mettes', 'Lorenzo Baraldi', 'Rita Cucchiara']
['cs.CV', 'cs.AI', 'cs.CL', 'cs.MM']
Addressing the retrieval of unsafe content from vision-language models such as CLIP is an important step towards real-world integration. Current efforts have relied on unlearning techniques that try to erase the model's knowledge of unsafe concepts. While effective in reducing unwanted outputs, unlearning limits the mo...
2025-03-15T13:18:04Z
CVPR 2025
null
null
null
null
null
null
null
null
null
2503.12167
PLM: Efficient Peripheral Language Models Hardware-Co-Designed for Ubiquitous Computing
['Cheng Deng', 'Luoyang Sun', 'Jiwen Jiang', 'Yongcheng Zeng', 'Xinjian Wu', 'Wenxin Zhao', 'Qingfa Xiao', 'Jiachuan Wang', 'Haoyang Li', 'Lei Chen', 'Lionel M. Ni', 'Haifeng Zhang', 'Jun Wang']
['cs.CL', 'I.2.7']
While scaling laws have been continuously validated in large language models (LLMs) with increasing model parameters, the inherent tension between the inference demands of LLMs and the limited resources of edge devices poses a critical challenge to the development of edge intelligence. Recently, numerous small language...
2025-03-15T15:11:17Z
null
null
null
null
null
null
null
null
null
null
2503.12294
The Lucie-7B LLM and the Lucie Training Dataset: Open resources for multilingual language generation
['Olivier Gouvert', 'Julie Hunter', 'Jérôme Louradour', 'Christophe Cerisara', 'Evan Dufraisse', 'Yaya Sy', 'Laura Rivière', 'Jean-Pierre Lorré', 'OpenLLM-France community']
['cs.CL', 'cs.AI']
We present both the Lucie Training Dataset and the Lucie-7B foundation model. The Lucie Training Dataset is a multilingual collection of textual corpora centered around French and designed to offset anglo-centric biases found in many datasets for large language model pretraining. Its French data is pulled not only from...
2025-03-15T23:20:45Z
null
null
null
The Lucie-7B LLM and the Lucie Training Dataset: Open resources for multilingual language generation
['Olivier Gouvert', 'Julie Hunter', 'Jérôme Louradour', 'Christophe Cerisara', 'Evan Dufraisse', 'Yaya Sy', 'Laura Rivière', 'Jean-Pierre Lorré', 'OpenLLM-France community']
2025
arXiv.org
0
50
['Computer Science']
2503.12440
HKCanto-Eval: A Benchmark for Evaluating Cantonese Language Understanding and Cultural Comprehension in LLMs
['Tsz Chung Cheng', 'Chung Shing Cheng', 'Chaak Ming Lau', 'Eugene Tin-Ho Lam', 'Chun Yat Wong', 'Hoi On Yu', 'Cheuk Hei Chong']
['cs.CL']
The ability of language models to comprehend and interact in diverse linguistic and cultural landscapes is crucial. The Cantonese language used in Hong Kong presents unique challenges for natural language processing due to its rich cultural nuances and lack of dedicated evaluation datasets. The HKCanto-Eval benchmark a...
2025-03-16T10:26:24Z
null
null
null
HKCanto-Eval: A Benchmark for Evaluating Cantonese Language Understanding and Cultural Comprehension in LLMs
['Tsz Chung Cheng', 'Chung Shing Cheng', 'Chaak-ming Lau', 'Eugene Tin-Ho Lam', 'Chun Yat Wong', 'Hoi On Yu', 'Cheuk Hei Chong']
2025
arXiv.org
2
58
['Computer Science']
2503.12446
BREEN: Bridge Data-Efficient Encoder-Free Multimodal Learning with Learnable Queries
['Tianle Li', 'Yongming Rao', 'Winston Hu', 'Yu Cheng']
['cs.CV', 'cs.AI']
Encoder-free multimodal large language models(MLLMs) eliminate the need for a well-trained vision encoder by directly processing image tokens before the language model. While this approach reduces computational overhead and model complexity, it often requires large amounts of training data to effectively capture the vi...
2025-03-16T10:43:14Z
null
null
null
BREEN: Bridge Data-Efficient Encoder-Free Multimodal Learning with Learnable Queries
['Tianle Li', 'Yongming Rao', 'Winston Hu', 'Yu Cheng']
2025
arXiv.org
0
41
['Computer Science']
2503.12507
Segment Any-Quality Images with Generative Latent Space Enhancement
['Guangqian Guo', 'Yong Guo', 'Xuehui Yu', 'Wenbo Li', 'Yaoxing Wang', 'Shan Gao']
['cs.CV']
Despite their success, Segment Anything Models (SAMs) experience significant performance drops on severely degraded, low-quality images, limiting their effectiveness in real-world scenarios. To address this, we propose GleSAM, which utilizes Generative Latent space Enhancement to boost robustness on low-quality images,...
2025-03-16T13:58:13Z
Accepted by CVPR2025
null
null
null
null
null
null
null
null
null
2503.12524
EXAONE Deep: Reasoning Enhanced Language Models
['LG AI Research', 'Kyunghoon Bae', 'Eunbi Choi', 'Kibong Choi', 'Stanley Jungkyu Choi', 'Yemuk Choi', 'Seokhee Hong', 'Junwon Hwang', 'Hyojin Jeon', 'Kijeong Jeon', 'Gerrard Jeongwon Jo', 'Hyunjik Jo', 'Jiyeon Jung', 'Hyosang Kim', 'Joonkee Kim', 'Seonghwan Kim', 'Soyeon Kim', 'Sunkyoung Kim', 'Yireun Kim', 'Yongil Ki...
['cs.CL', 'cs.AI']
We present EXAONE Deep series, which exhibits superior capabilities in various reasoning tasks, including math and coding benchmarks. We train our models mainly on the reasoning-specialized dataset that incorporates long streams of thought processes. Evaluation results show that our smaller models, EXAONE Deep 2.4B and...
2025-03-16T14:39:33Z
arXiv admin note: substantial text overlap with arXiv:2412.04862, arXiv:2408.03541
null
null
null
null
null
null
null
null
null
2503.12532
STEVE: A Step Verification Pipeline for Computer-use Agent Training
['Fanbin Lu', 'Zhisheng Zhong', 'Ziqin Wei', 'Shu Liu', 'Chi-Wing Fu', 'Jiaya Jia']
['cs.CV', 'cs.AI']
Developing AI agents to autonomously manipulate graphical user interfaces is a long challenging task. Recent advances in data scaling law inspire us to train computer-use agents with a scaled instruction set, yet using behavior cloning to train agents still requires immense high-quality trajectories. To meet the scalab...
2025-03-16T14:53:43Z
null
null
null
STEVE: A Step Verification Pipeline for Computer-use Agent Training
['Fanbin Lu', 'Zhisheng Zhong', 'Ziqin Wei', 'Shu Liu', 'Chi-Wing Fu', 'Jiaya Jia']
2025
arXiv.org
0
36
['Computer Science']
2503.12553
Niagara: Normal-Integrated Geometric Affine Field for Scene Reconstruction from a Single View
['Xianzu Wu', 'Zhenxin Ai', 'Harry Yang', 'Ser-Nam Lim', 'Jun Liu', 'Huan Wang']
['cs.GR', 'cs.CV']
Recent advances in single-view 3D scene reconstruction have highlighted the challenges in capturing fine geometric details and ensuring structural consistency, particularly in high-fidelity outdoor scene modeling. This paper presents Niagara, a new single-view 3D scene reconstruction framework that can faithfully recon...
2025-03-16T15:50:18Z
null
null
null
null
null
null
null
null
null
null
2503.12649
FW-Merging: Scaling Model Merging with Frank-Wolfe Optimization
['Hao Mark Chen', 'Shell Xu Hu', 'Wayne Luk', 'Timothy Hospedales', 'Hongxiang Fan']
['cs.LG', 'cs.AI']
Model merging has emerged as a promising approach for multi-task learning (MTL), offering a data-efficient alternative to conventional fine-tuning. However, with the rapid development of the open-source AI ecosystem and the increasing availability of fine-tuned foundation models, existing model merging methods face two...
2025-03-16T21:07:05Z
null
null
null
FW-Merging: Scaling Model Merging with Frank-Wolfe Optimization
['Hao Chen', 'S. Hu', 'Wayne Luk', 'Timothy M. Hospedales', 'Hongxiang Fan']
2025
arXiv.org
1
67
['Computer Science']
2503.12720
Towards Open-World Generation of Stereo Images and Unsupervised Matching
['Feng Qiao', 'Zhexiao Xiong', 'Eric Xing', 'Nathan Jacobs']
['cs.CV']
Stereo images are fundamental to numerous applications, including extended reality (XR) devices, autonomous driving, and robotics. Unfortunately, acquiring high-quality stereo images remains challenging due to the precise calibration requirements of dual-camera setups and the complexity of obtaining accurate, dense dis...
2025-03-17T01:19:28Z
Accepted by ICCV 2025
null
null
GenStereo: Towards Open-World Generation of Stereo Images and Unsupervised Matching
['Feng Qiao', 'Zhexiao Xiong', 'Eric Xing', 'Nathan Jacobs']
2025
arXiv.org
1
65
['Computer Science']
2503.12769
ViSpeak: Visual Instruction Feedback in Streaming Videos
['Shenghao Fu', 'Qize Yang', 'Yuan-Ming Li', 'Yi-Xing Peng', 'Kun-Yu Lin', 'Xihan Wei', 'Jian-Fang Hu', 'Xiaohua Xie', 'Wei-Shi Zheng']
['cs.CV']
Recent advances in Large Multi-modal Models (LMMs) are primarily focused on offline video understanding. Instead, streaming video understanding poses great challenges to recent models due to its time-sensitive, omni-modal and interactive characteristics. In this work, we aim to extend the streaming video understanding ...
2025-03-17T03:05:31Z
null
null
null
ViSpeak: Visual Instruction Feedback in Streaming Videos
['Shenghao Fu', 'Qize Yang', 'Yuan-Ming Li', 'Yi-Xing Peng', 'Kun-Yu Lin', 'Xihan Wei', 'Jianfang Hu', 'Xiaohua Xie', 'Wei-Shi Zheng']
2025
arXiv.org
1
69
['Computer Science']
2503.12797
DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding
['Xinyu Ma', 'Ziyang Ding', 'Zhicong Luo', 'Chi Chen', 'Zonghao Guo', 'Derek F. Wong', 'Xiaoyi Feng', 'Maosong Sun']
['cs.CV', 'cs.AI', 'cs.CL']
Human experts excel at fine-grained visual discrimination by leveraging domain knowledge to refine perceptual features, a capability that remains underdeveloped in current Multimodal Large Language Models (MLLMs). Despite possessing vast expert-level knowledge, MLLMs struggle to integrate reasoning into visual percepti...
2025-03-17T04:06:34Z
null
null
null
null
null
null
null
null
null
null
2503.12937
R1-VL: Learning to Reason with Multimodal Large Language Models via Step-wise Group Relative Policy Optimization
['Jingyi Zhang', 'Jiaxing Huang', 'Huanjin Yao', 'Shunyu Liu', 'Xikun Zhang', 'Shijian Lu', 'Dacheng Tao']
['cs.AI', 'cs.CL', 'cs.CV', 'cs.LG']
Recent studies generally enhance MLLMs' reasoning capabilities via supervised fine-tuning on high-quality chain-of-thought reasoning data, which often leads models to merely imitate successful reasoning paths without understanding what the wrong reasoning paths are. In this work, we aim to enhance the MLLMs' reasoning ...
2025-03-17T08:51:44Z
null
null
null
null
null
null
null
null
null
null
2503.12963
Unlock Pose Diversity: Accurate and Efficient Implicit Keypoint-based Spatiotemporal Diffusion for Audio-driven Talking Portrait
['Chaolong Yang', 'Kai Yao', 'Yuyao Yan', 'Chenru Jiang', 'Weiguang Zhao', 'Jie Sun', 'Guangliang Cheng', 'Yifei Zhang', 'Bin Dong', 'Kaizhu Huang']
['cs.CV']
Audio-driven single-image talking portrait generation plays a crucial role in virtual reality, digital human creation, and filmmaking. Existing approaches are generally categorized into keypoint-based and image-based methods. Keypoint-based methods effectively preserve character identity but struggle to capture fine fa...
2025-03-17T09:18:31Z
null
null
null
Unlock Pose Diversity: Accurate and Efficient Implicit Keypoint-based Spatiotemporal Diffusion for Audio-driven Talking Portrait
['Chaolong Yang', 'Kai Yao', 'Yuyao Yan', 'Chenru Jiang', 'Weiguang Zhao', 'Jie Sun', 'Guangliang Cheng', 'Yifei Zhang', 'Bin Dong', 'Kaizhu Huang']
2025
arXiv.org
0
39
['Computer Science']
2503.13026
HiMTok: Learning Hierarchical Mask Tokens for Image Segmentation with Large Multimodal Model
['Tao Wang', 'Changxu Cheng', 'Lingfeng Wang', 'Senda Chen', 'Wuyue Zhao']
['cs.CV']
The remarkable performance of large multimodal models (LMMs) has attracted significant interest from the image segmentation community. To align with the next-token-prediction paradigm, current LMM-driven segmentation methods either use object boundary points to represent masks or introduce special segmentation tokens, ...
2025-03-17T10:29:08Z
Accepted by ICCV 2025; the code is at https://github.com/yayafengzi/LMM-HiMTok
null
null
null
null
null
null
null
null
null
2503.13060
Historic Scripts to Modern Vision: A Novel Dataset and A VLM Framework for Transliteration of Modi Script to Devanagari
['Harshal Kausadikar', 'Tanvi Kale', 'Onkar Susladkar', 'Sparsh Mittal']
['cs.CV']
In medieval India, the Marathi language was written using the Modi script. The texts written in Modi script include extensive knowledge about medieval sciences, medicines, land records and authentic evidence about Indian history. Around 40 million documents are in poor condition and have not yet been transliterated. Fu...
2025-03-17T11:07:29Z
Under submission at a conference
null
null
Historic Scripts to Modern Vision: A Novel Dataset and A VLM Framework for Transliteration of Modi Script to Devanagari
['Harshal Kausadikar', 'Tanvi Kale', 'Onkar Susladkar', 'Sparsh Mittal']
2025
arXiv.org
0
36
['Computer Science']
2503.13260
Don't Judge Before You CLIP: A Unified Approach for Perceptual Tasks
['Amit Zalcher', 'Navve Wasserman', 'Roman Beliy', 'Oliver Heinimann', 'Michal Irani']
['cs.CV']
Visual perceptual tasks aim to predict human judgment of images (e.g., emotions invoked by images, image quality assessment). Unlike objective tasks such as object/scene recognition, perceptual tasks rely on subjective human assessments, making its data-labeling difficult. The scarcity of such human-annotated data resu...
2025-03-17T15:15:31Z
null
null
null
Don't Judge Before You CLIP: A Unified Approach for Perceptual Tasks
['Amit Zalcher', 'Navve Wasserman', 'Roman Beliy', 'Oliver Heinimann', 'Michal Irani']
2025
arXiv.org
0
58
['Computer Science']
2503.13265
FlexWorld: Progressively Expanding 3D Scenes for Flexiable-View Synthesis
['Luxi Chen', 'Zihan Zhou', 'Min Zhao', 'Yikai Wang', 'Ge Zhang', 'Wenhao Huang', 'Hao Sun', 'Ji-Rong Wen', 'Chongxuan Li']
['cs.CV']
Generating flexible-view 3D scenes, including 360{\deg} rotation and zooming, from single images is challenging due to a lack of 3D data. To this end, we introduce FlexWorld, a novel framework consisting of two key components: (1) a strong video-to-video (V2V) diffusion model to generate high-quality novel view images ...
2025-03-17T15:18:38Z
null
null
null
FlexWorld: Progressively Expanding 3D Scenes for Flexiable-View Synthesis
['Luxi Chen', 'Zihan Zhou', 'Min Zhao', 'Yikai Wang', 'Ge Zhang', 'Wenhao Huang', 'Hao Sun', 'Ji-Rong Wen', 'Chongxuan Li']
2025
arXiv.org
4
72
['Computer Science']
2503.13327
Edit Transfer: Learning Image Editing via Vision In-Context Relations
['Lan Chen', 'Qi Mao', 'Yuchao Gu', 'Mike Zheng Shou']
['cs.CV']
We introduce a new setting, Edit Transfer, where a model learns a transformation from just a single source-target example and applies it to a new query image. While text-based methods excel at semantic manipulations through textual prompts, they often struggle with precise geometric details (e.g., poses and viewpoint c...
2025-03-17T16:04:44Z
null
null
null
null
null
null
null
null
null
null
2503.13360
Mitigating Visual Forgetting via Take-along Visual Conditioning for Multi-modal Long CoT Reasoning
['Hai-Long Sun', 'Zhun Sun', 'Houwen Peng', 'Han-Jia Ye']
['cs.CV', 'cs.AI', 'cs.LG']
Recent advancements in Large Language Models (LLMs) have demonstrated enhanced reasoning capabilities, evolving from Chain-of-Thought (CoT) prompting to advanced, product-oriented solutions like OpenAI o1. During our re-implementation of this model, we noticed that in multimodal tasks requiring visual input (e.g., geom...
2025-03-17T16:45:12Z
Accepted to ACL 2025. The project page is available at https://sun-hailong.github.io/projects/TVC
null
null
Mitigating Visual Forgetting via Take-along Visual Conditioning for Multi-modal Long CoT Reasoning
['Hai-Long Sun', 'Zhun Sun', 'Houwen Peng', 'Han-Jia Ye']
2,025
arXiv.org
6
49
['Computer Science']
2,503.13383
Cream of the Crop: Harvesting Rich, Scalable and Transferable Multi-Modal Data for Instruction Fine-Tuning
['Mengyao Lyu', 'Yan Li', 'Huasong Zhong', 'Wenhao Yang', 'Hui Chen', 'Jungong Han', 'Guiguang Ding', 'Zhenheng Yang']
['cs.CV', 'cs.AI', 'cs.CL', 'cs.LG']
The hypothesis that pretrained large language models (LLMs) necessitate only minimal supervision during the fine-tuning (SFT) stage (Zhou et al., 2024) has been substantiated by recent advancements in data curation and selection research. However, their stability and generalizability are compromised due to the vulnerab...
2025-03-17T17:11:22Z
update comparison with sota and analysis
null
null
null
null
null
null
null
null
null
2,503.13423
SuperBPE: Space Travel for Language Models
['Alisa Liu', 'Jonathan Hayase', 'Valentin Hofmann', 'Sewoong Oh', 'Noah A. Smith', 'Yejin Choi']
['cs.CL', 'cs.LG']
The assumption across nearly all language model (LM) tokenization schemes is that tokens should be subwords, i.e., contained within word boundaries. While providing a seemingly reasonable inductive bias, is this common practice limiting the potential of modern LMs? Whitespace is not a reliable delimiter of meaning, as ...
2025-03-17T17:53:23Z
updated related work
null
null
null
null
null
null
null
null
null
2,503.13434
BlobCtrl: A Unified and Flexible Framework for Element-level Image Generation and Editing
['Yaowei Li', 'Lingen Li', 'Zhaoyang Zhang', 'Xiaoyu Li', 'Guangzhi Wang', 'Hongxiang Li', 'Xiaodong Cun', 'Ying Shan', 'Yuexian Zou']
['cs.CV', 'cs.AI', 'cs.MM']
Element-level visual manipulation is essential in digital content creation, but current diffusion-based methods lack the precision and flexibility of traditional tools. In this work, we introduce BlobCtrl, a framework that unifies element-level generation and editing using a probabilistic blob-based representation. By ...
2025-03-17T17:58:05Z
Project Webpage: https://liyaowei-stu.github.io/project/BlobCtrl/
null
null
BlobCtrl: A Unified and Flexible Framework for Element-level Image Generation and Editing
['Yaowei Li', 'Lingen Li', 'Zhaoyang Zhang', 'Xiaoyu Li', 'Guangzhi Wang', 'Hongxiang Li', 'Xiaodong Cun', 'Ying Shan', 'Yuexian Zou']
2,025
arXiv.org
2
49
['Computer Science']
2,503.13439
Amodal3R: Amodal 3D Reconstruction from Occluded 2D Images
['Tianhao Wu', 'Chuanxia Zheng', 'Frank Guan', 'Andrea Vedaldi', 'Tat-Jen Cham']
['cs.CV']
Most image-based 3D object reconstructors assume that objects are fully visible, ignoring occlusions that commonly occur in real-world scenarios. In this paper, we introduce Amodal3R, a conditional 3D generative model designed to reconstruct 3D objects from partial observations. We start from a "foundation" 3D generati...
2025-03-17T17:59:01Z
Project Page: https://sm0kywu.github.io/Amodal3R/
null
null
null
null
null
null
null
null
null
2,503.1344
MaTVLM: Hybrid Mamba-Transformer for Efficient Vision-Language Modeling
['Yingyue Li', 'Bencheng Liao', 'Wenyu Liu', 'Xinggang Wang']
['cs.CV']
With the advancement of RNN models with linear complexity, the quadratic complexity challenge of transformers has the potential to be overcome. Notably, the emerging Mamba-2 has demonstrated competitive performance, bridging the gap between RNN models and transformers. However, due to sequential processing and vanishin...
2025-03-17T17:59:01Z
Code and model are available at http://github.com/hustvl/MaTVLM
null
null
null
null
null
null
null
null
null
2,503.13444
VideoMind: A Chain-of-LoRA Agent for Long Video Reasoning
['Ye Liu', 'Kevin Qinghong Lin', 'Chang Wen Chen', 'Mike Zheng Shou']
['cs.CV', 'cs.AI']
Videos, with their unique temporal dimension, demand precise grounded understanding, where answers are directly linked to visual, interpretable evidence. Despite significant breakthroughs in reasoning capabilities within Large Language Models, multi-modal reasoning - especially for videos - remains unexplored. In this ...
2025-03-17T17:59:33Z
Project Page: https://videomind.github.io/
null
null
VideoMind: A Chain-of-LoRA Agent for Long Video Reasoning
['Ye Liu', 'Kevin Qinghong Lin', 'Chang Wen Chen', 'Mike Zheng Shou']
2,025
arXiv.org
6
103
['Computer Science']
2,503.13661
Pensez: Less Data, Better Reasoning -- Rethinking French LLM
['Huy Hoang Ha']
['cs.CL']
Large language models (LLMs) have demonstrated remarkable capabilities in various natural language processing tasks. However, achieving strong performance in specialized domains like mathematical reasoning and non-English languages often requires extensive training on massive datasets. This paper investigates a contras...
2025-03-17T19:09:11Z
null
null
null
null
null
null
null
null
null
null
2,503.13939
Med-R1: Reinforcement Learning for Generalizable Medical Reasoning in Vision-Language Models
['Yuxiang Lai', 'Jike Zhong', 'Ming Li', 'Shitian Zhao', 'Xiaofeng Yang']
['cs.CV']
Vision-language models (VLMs) have achieved impressive progress in natural image reasoning, yet their potential in medical imaging remains underexplored. Medical vision-language tasks demand precise understanding and clinically coherent answers, which are difficult to achieve due to the complexity of medical data and t...
2025-03-18T06:12:38Z
null
null
null
Med-R1: Reinforcement Learning for Generalizable Medical Reasoning in Vision-Language Models
['Yuxiang Lai', 'Jike Zhong', 'Ming Li', 'Shitian Zhao', 'Xiaofen Yang']
2,025
arXiv.org
31
37
['Computer Science']
2,503.13988
Empowering Smaller Models: Tuning LLaMA and Gemma with Chain-of-Thought for Ukrainian Exam Tasks
['Mykyta Syromiatnikov', 'Victoria Ruvinskaya', 'Nataliia Komleva']
['cs.CL', 'cs.AI']
Leading large language models have demonstrated impressive capabilities in reasoning-intensive tasks, such as standardized educational testing. However, they often require extensive training in low-resource settings with inaccessible infrastructure. Small or compact models, though more efficient, frequently lack suffic...
2025-03-18T07:44:49Z
12 pages, 6 tables, 2 figures
null
null
null
null
null
null
null
null
null
2,503.14002
MeshFleet: Filtered and Annotated 3D Vehicle Dataset for Domain Specific Generative Modeling
['Damian Boborzi', 'Phillip Mueller', 'Jonas Emrich', 'Dominik Schmid', 'Sebastian Mueller', 'Lars Mikelsons']
['cs.CV', 'cs.AI', 'cs.LG']
Generative models have recently made remarkable progress in the field of 3D objects. However, their practical application in fields like engineering remains limited since they fail to deliver the accuracy, quality, and controllability needed for domain-specific tasks. Fine-tuning large generative models is a promising ...
2025-03-18T08:09:24Z
null
null
null
null
null
null
null
null
null
null
2,503.14136
CARE: A QLoRA-Fine Tuned Multi-Domain Chatbot With Fast Learning On Minimal Hardware
['Ankit Dutta', 'Nabarup Ghosh', 'Ankush Chatterjee']
['cs.CL', 'cs.AI']
Large Language models have demonstrated excellent domain-specific question-answering capabilities when finetuned with a particular dataset of that specific domain. However, fine-tuning the models requires a significant amount of training time and a considerable amount of hardware. In this work, we propose CARE (Custome...
2025-03-18T10:58:10Z
null
null
null
null
null
null
null
null
null
null
2,503.14173
NERCat: Fine-Tuning for Enhanced Named Entity Recognition in Catalan
['Guillem Cadevall Ferreres', 'Marc Serrano Sanz', 'Marc Bardeli Gámez', 'Pol Gerdt Basullas', 'Francesc Tarres Ruiz', 'Raul Quijada Ferrero']
['cs.CL', '68T50', 'I.2.7']
Named Entity Recognition (NER) is a critical component of Natural Language Processing (NLP) for extracting structured information from unstructured text. However, for low-resource languages like Catalan, the performance of NER systems often suffers due to the lack of high-quality annotated datasets. This paper introduc...
2025-03-18T11:44:19Z
7 pages, 1 table
null
null
NERCat: Fine-Tuning for Enhanced Named Entity Recognition in Catalan
['Guillem Cadevall Ferreres', 'Marc Serrano Sanz', 'Marc Bardeli Gámez', 'Pol Gerdt Basullas', 'Francesc Tarres-Ruiz', 'Raul Quijada Ferrero']
2,025
arXiv.org
0
4
['Computer Science']
2,503.14189
Towards Harmless Multimodal Assistants with Blind Preference Optimization
['Yongqi Li', 'Lu Yang', 'Jian Wang', 'Runyang You', 'Wenjie Li', 'Liqiang Nie']
['cs.CL', 'cs.CV']
Multimodal Large Language Models (MLLMs) have demonstrated impressive capabilities in multimodal understanding, reasoning, and interaction. Given the extensive applications of MLLMs, the associated safety issues have become increasingly critical. Due to the effectiveness of preference optimization in aligning MLLMs wit...
2025-03-18T12:02:38Z
null
null
null
null
null
null
null
null
null
null
2,503.14325
LeanVAE: An Ultra-Efficient Reconstruction VAE for Video Diffusion Models
['Yu Cheng', 'Fajie Yuan']
['cs.CV', 'eess.IV']
Recent advances in Latent Video Diffusion Models (LVDMs) have revolutionized video generation by leveraging Video Variational Autoencoders (Video VAEs) to compress intricate video data into a compact latent space. However, as LVDM training scales, the computational overhead of Video VAEs becomes a critical bottleneck, ...
2025-03-18T14:58:59Z
null
null
null
null
null
null
null
null
null
null