| column | dtype | stats |
|---|---|---|
| arxiv_id | float64 | min 1.5k, max 2.51k |
| title | stringlengths | 9 to 178 |
| authors | stringlengths | 2 to 22.8k |
| categories | stringlengths | 4 to 146 |
| summary | stringlengths | 103 to 1.92k |
| published | stringdate | 2015-02-06 10:44:00 to 2025-07-10 17:59:58 |
| comments | stringlengths | 2 to 417 |
| journal_ref | stringclasses | 321 values |
| doi | stringclasses | 398 values |
| ss_title | stringlengths | 8 to 159 |
| ss_authors | stringlengths | 11 to 8.38k |
| ss_year | float64 | min 2.02k, max 2.03k |
| ss_venue | stringclasses | 281 values |
| ss_citationCount | float64 | min 0, max 134k |
| ss_referenceCount | float64 | min 0, max 429 |
| ss_fieldsOfStudy | stringclasses | 47 values |
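The records in this dump follow the 16-column schema above. As a minimal sketch of how a row could be given structure (the field names come from the schema; the `ArxivRecord` class and `arxiv_id_str` helper are illustrative, not part of the dataset; the sample values are from the first row below), note in particular that `arxiv_id` is stored as float64, which drops trailing zeros:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record type mirroring the 16-column schema above.
# ss_* fields hold Semantic Scholar metadata and may be null.
@dataclass
class ArxivRecord:
    arxiv_id: float            # float64 in the dump, e.g. 2412.04318
    title: str
    authors: str               # stringified Python list
    categories: str            # stringified list of arXiv categories
    summary: str               # (possibly truncated) abstract
    published: str             # ISO-8601 timestamp string
    comments: Optional[str]
    journal_ref: Optional[str]
    doi: Optional[str]
    ss_title: Optional[str]
    ss_authors: Optional[str]
    ss_year: Optional[float]
    ss_venue: Optional[str]
    ss_citationCount: Optional[float]
    ss_referenceCount: Optional[float]
    ss_fieldsOfStudy: Optional[str]

    def arxiv_id_str(self) -> str:
        # float64 storage drops trailing zeros (2412.0488 -> 2412.04880),
        # so re-pad the fractional part to the canonical NNNN.NNNNN form.
        whole, frac = f"{self.arxiv_id:.5f}".split(".")
        return f"{whole}.{frac}"

rec = ArxivRecord(
    arxiv_id=2412.04318,
    title="The Hyperfitting Phenomenon: Sharpening and Stabilizing LLMs "
          "for Open-Ended Text Generation",
    authors="['Fredrik Carlsson', 'Fangyu Liu', 'Daniel Ward', "
            "'Murathan Kurfali', 'Joakim Nivre']",
    categories="['cs.CL', 'cs.AI']",
    summary="This paper introduces the counter-intuitive generalization...",
    published="2024-12-05T16:34:20Z",
    comments="Under review at ICLR",
    journal_ref=None,
    doi=None,
    ss_title="The Hyperfitting Phenomenon: Sharpening and Stabilizing LLMs "
             "for Open-Ended Text Generation",
    ss_authors="['Fredrik Carlsson', 'Fangyu Liu', 'Daniel Ward', "
               "'Murathan Kurfali', 'Joakim Nivre']",
    ss_year=2024.0,
    ss_venue="International Conference on Learning Representations",
    ss_citationCount=3.0,
    ss_referenceCount=27.0,
    ss_fieldsOfStudy="['Computer Science']",
)
print(rec.arxiv_id_str())  # 2412.04318
```

The re-padding matters for IDs such as 2412.04880, 2412.05270, and 2412.06410, which the float64 column renders as 2412.0488, 2412.0527, and 2412.0641.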
arxiv_id: 2412.04318
title: The Hyperfitting Phenomenon: Sharpening and Stabilizing LLMs for Open-Ended Text Generation
authors: ['Fredrik Carlsson', 'Fangyu Liu', 'Daniel Ward', 'Murathan Kurfali', 'Joakim Nivre']
categories: ['cs.CL', 'cs.AI']
summary: This paper introduces the counter-intuitive generalization results of overfitting pre-trained large language models (LLMs) on very small datasets. In the setting of open-ended text generation, it is well-documented that LLMs tend to generate repetitive and dull sequences, a phenomenon that is especially apparent when g...
published: 2024-12-05T16:34:20Z
comments: Under review at ICLR
journal_ref: null
doi: null
ss_title: The Hyperfitting Phenomenon: Sharpening and Stabilizing LLMs for Open-Ended Text Generation
ss_authors: ['Fredrik Carlsson', 'Fangyu Liu', 'Daniel Ward', 'Murathan Kurfali', 'Joakim Nivre']
ss_year: 2024
ss_venue: International Conference on Learning Representations
ss_citationCount: 3
ss_referenceCount: 27
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2412.04332
title: Liquid: Language Models are Scalable and Unified Multi-modal Generators
authors: ['Junfeng Wu', 'Yi Jiang', 'Chuofan Ma', 'Yuliang Liu', 'Hengshuang Zhao', 'Zehuan Yuan', 'Song Bai', 'Xiang Bai']
categories: ['cs.CV']
summary: We present Liquid, an auto-regressive generation paradigm that seamlessly integrates visual comprehension and generation by tokenizing images into discrete codes and learning these code embeddings alongside text tokens within a shared feature space for both vision and language. Unlike previous multimodal large language...
published: 2024-12-05T16:48:16Z
comments: Technical report. Project page: https://foundationvision.github.io/Liquid/
journal_ref: null
doi: null
ss_title: Liquid: Language Models are Scalable and Unified Multi-modal Generators
ss_authors: ['Junfeng Wu', 'Yi Jiang', 'Chuofan Ma', 'Yuliang Liu', 'Hengshuang Zhao', 'Zehuan Yuan', 'Song Bai', 'Xiang Bai']
ss_year: 2024
ss_venue: null
ss_citationCount: 9
ss_referenceCount: 83
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2412.04418
title: ACE2-SOM: Coupling an ML atmospheric emulator to a slab ocean and learning the sensitivity of climate to changed CO$_2$
authors: ['Spencer K. Clark', 'Oliver Watt-Meyer', 'Anna Kwa', 'Jeremy McGibbon', 'Brian Henn', 'W. Andre Perkins', 'Elynn Wu', 'Lucas M. Harris', 'Christopher S. Bretherton']
categories: ['physics.ao-ph']
summary: While autoregressive machine-learning-based emulators have been trained to produce stable and accurate rollouts in the climate of the present-day and recent past, none so far have been trained to emulate the sensitivity of climate to substantial changes in CO$_2$ or other greenhouse gases. As an initial step we couple ...
published: 2024-12-05T18:44:33Z
comments: 31 pages, 13 figures
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.04431
title: Infinity: Scaling Bitwise AutoRegressive Modeling for High-Resolution Image Synthesis
authors: ['Jian Han', 'Jinlai Liu', 'Yi Jiang', 'Bin Yan', 'Yuqi Zhang', 'Zehuan Yuan', 'Bingyue Peng', 'Xiaobing Liu']
categories: ['cs.CV']
summary: We present Infinity, a Bitwise Visual AutoRegressive Modeling capable of generating high-resolution, photorealistic images following language instruction. Infinity redefines visual autoregressive model under a bitwise token prediction framework with an infinite-vocabulary tokenizer & classifier and bitwise self-correct...
published: 2024-12-05T18:53:02Z
comments: 17 pages, 14 figures
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.04432
title: Divot: Diffusion Powers Video Tokenizer for Comprehension and Generation
authors: ['Yuying Ge', 'Yizhuo Li', 'Yixiao Ge', 'Ying Shan']
categories: ['cs.CV']
summary: In recent years, there has been a significant surge of interest in unifying image comprehension and generation within Large Language Models (LLMs). This growing interest has prompted us to explore extending this unification to videos. The core challenge lies in developing a versatile video tokenizer that captures both ...
published: 2024-12-05T18:53:04Z
comments: Project released at: https://github.com/TencentARC/Divot
journal_ref: null
doi: null
ss_title: Divot: Diffusion Powers Video Tokenizer for Comprehension and Generation
ss_authors: ['Yuying Ge', 'Yizhuo Li', 'Yixiao Ge', 'Ying Shan']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 3
ss_referenceCount: 0
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2412.04445
title: Moto: Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos
authors: ['Yi Chen', 'Yuying Ge', 'Weiliang Tang', 'Yizhuo Li', 'Yixiao Ge', 'Mingyu Ding', 'Ying Shan', 'Xihui Liu']
categories: ['cs.RO', 'cs.AI', 'cs.CL', 'cs.CV', 'cs.LG']
summary: Recent developments in Large Language Models pre-trained on extensive corpora have shown significant success in various natural language processing tasks with minimal fine-tuning. This success offers new promise for robotics, which has long been constrained by the high cost of action-labeled data. We ask: given the abu...
published: 2024-12-05T18:57:04Z
comments: Project released at: https://chenyi99.github.io/moto/ Code released at: https://github.com/TencentARC/Moto Update: Added content related to real-world robot experiments and learning from human videos; Modified author information
journal_ref: null
doi: null
ss_title: Moto: Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos
ss_authors: ['Yi Chen', 'Yuying Ge', 'Weiliang Tang', 'Yizhuo Li', 'Yixiao Ge', 'Mingyu Ding', 'Ying Shan', 'Xihui Liu']
ss_year: 2024
ss_venue: null
ss_citationCount: 3
ss_referenceCount: 59
ss_fieldsOfStudy: ['Computer Science']
arxiv_id: 2412.04446
title: DiCoDe: Diffusion-Compressed Deep Tokens for Autoregressive Video Generation with Language Models
authors: ['Yizhuo Li', 'Yuying Ge', 'Yixiao Ge', 'Ping Luo', 'Ying Shan']
categories: ['cs.CV']
summary: Videos are inherently temporal sequences by their very nature. In this work, we explore the potential of modeling videos in a chronological and scalable manner with autoregressive (AR) language models, inspired by their success in natural language processing. We introduce DiCoDe, a novel approach that leverages Diffusi...
published: 2024-12-05T18:57:06Z
comments: Project Page: https://liyz15.github.io/DiCoDe
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.04448
title: MEMO: Memory-Guided Diffusion for Expressive Talking Video Generation
authors: ['Longtao Zheng', 'Yifan Zhang', 'Hanzhong Guo', 'Jiachun Pan', 'Zhenxiong Tan', 'Jiahao Lu', 'Chuanxin Tang', 'Bo An', 'Shuicheng Yan']
categories: ['cs.CV']
summary: Recent advances in video diffusion models have unlocked new potential for realistic audio-driven talking video generation. However, achieving seamless audio-lip synchronization, maintaining long-term identity consistency, and producing natural, audio-aligned expressions in generated talking videos remain significant ch...
published: 2024-12-05T18:57:26Z
comments: Project Page: https://memoavatar.github.io
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.04449
title: p-MoD: Building Mixture-of-Depths MLLMs via Progressive Ratio Decay
authors: ['Jun Zhang', 'Desen Meng', 'Ji Qi', 'Zhenpeng Huang', 'Tao Wu', 'Limin Wang']
categories: ['cs.CV', 'cs.CL']
summary: Despite the remarkable performance of multimodal large language models (MLLMs) across diverse tasks, the substantial training and inference costs impede their advancement. The majority of computation stems from the overwhelming volume of vision tokens processed by the transformer decoder. In this paper, we propose to b...
published: 2024-12-05T18:58:03Z
comments: Technical Report; Code released at https://github.com/MCG-NJU/p-MoD
journal_ref: null
doi: null
ss_title: p-MoD: Building Mixture-of-Depths MLLMs via Progressive Ratio Decay
ss_authors: ['Jun Zhang', 'Desen Meng', 'Ji Qi', 'Zhenpeng Huang', 'Tao Wu', 'Limin Wang']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 4
ss_referenceCount: 0
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2412.04455
title: Code-as-Monitor: Constraint-aware Visual Programming for Reactive and Proactive Robotic Failure Detection
authors: ['Enshen Zhou', 'Qi Su', 'Cheng Chi', 'Zhizheng Zhang', 'Zhongyuan Wang', 'Tiejun Huang', 'Lu Sheng', 'He Wang']
categories: ['cs.RO', 'cs.AI', 'cs.CV', 'cs.LG']
summary: Automatic detection and prevention of open-set failures are crucial in closed-loop robotic systems. Recent studies often struggle to simultaneously identify unexpected failures reactively after they occur and prevent foreseeable ones proactively. To this end, we propose Code-as-Monitor (CaM), a novel paradigm leveragin...
published: 2024-12-05T18:58:27Z
comments: Accepted by CVPR 2025. Project page: https://zhoues.github.io/Code-as-Monitor/
journal_ref: null
doi: null
ss_title: Code-as-Monitor: Constraint-aware Visual Programming for Reactive and Proactive Robotic Failure Detection
ss_authors: ['Enshen Zhou', 'Qi Su', 'Cheng Chi', 'Zhizheng Zhang', 'Zhongyuan Wang', 'Tiejun Huang', 'Lu Sheng', 'He Wang']
ss_year: 2024
ss_venue: Computer Vision and Pattern Recognition
ss_citationCount: 8
ss_referenceCount: 78
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2412.04468
title: NVILA: Efficient Frontier Visual Language Models
authors: ['Zhijian Liu', 'Ligeng Zhu', 'Baifeng Shi', 'Zhuoyang Zhang', 'Yuming Lou', 'Shang Yang', 'Haocheng Xi', 'Shiyi Cao', 'Yuxian Gu', 'Dacheng Li', 'Xiuyu Li', 'Yunhao Fang', 'Yukang Chen', 'Cheng-Yu Hsieh', 'De-An Huang', 'An-Chieh Cheng', 'Vishwesh Nath', 'Jinyi Hu', 'Sifei Liu', 'Ranjay Krishna', 'Daguang Xu', 'Xiaolo...
categories: ['cs.CV']
summary: Visual language models (VLMs) have made significant advances in accuracy in recent years. However, their efficiency has received much less attention. This paper introduces NVILA, a family of open VLMs designed to optimize both efficiency and accuracy. Building on top of VILA, we improve its model architecture by first ...
published: 2024-12-05T18:59:55Z
comments: null
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.04506
title: Arctic-Embed 2.0: Multilingual Retrieval Without Compromise
authors: ['Puxuan Yu', 'Luke Merrick', 'Gaurav Nuti', 'Daniel Campos']
categories: ['cs.CL', 'cs.IR', 'cs.LG']
summary: This paper presents the training methodology of Arctic-Embed 2.0, a set of open-source text embedding models built for accurate and efficient multilingual retrieval. While prior works have suffered from degraded English retrieval quality, Arctic-Embed 2.0 delivers competitive retrieval quality on multilingual and Engli...
published: 2024-12-03T22:59:36Z
comments: 10 pages, 5 figures, 3 tables
journal_ref: null
doi: null
ss_title: Arctic-Embed 2.0: Multilingual Retrieval Without Compromise
ss_authors: ['Puxuan Yu', 'Luke Merrick', 'Gaurav Nuti', 'Daniel Campos']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 15
ss_referenceCount: 0
ss_fieldsOfStudy: ['Computer Science']
arxiv_id: 2412.04533
title: Mask-Adapter: The Devil is in the Masks for Open-Vocabulary Segmentation
authors: ['Yongkang Li', 'Tianheng Cheng', 'Bin Feng', 'Wenyu Liu', 'Xinggang Wang']
categories: ['cs.CV']
summary: Recent open-vocabulary segmentation methods adopt mask generators to predict segmentation masks and leverage pre-trained vision-language models, e.g., CLIP, to classify these masks via mask pooling. Although these approaches show promising results, it is counterintuitive that accurate masks often fail to yield accurate...
published: 2024-12-05T17:42:37Z
comments: Accepted by CVPR 2025; Code & models: https://github.com/hustvl/MaskAdapter
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.04814
title: LiFT: Leveraging Human Feedback for Text-to-Video Model Alignment
authors: ['Yibin Wang', 'Zhiyu Tan', 'Junyan Wang', 'Xiaomeng Yang', 'Cheng Jin', 'Hao Li']
categories: ['cs.CV']
summary: Recent advances in text-to-video (T2V) generative models have shown impressive capabilities. However, these models are still inadequate in aligning synthesized videos with human preferences (e.g., accurately reflecting text descriptions), which is particularly difficult to address, as human preferences are subjective a...
published: 2024-12-06T07:16:14Z
comments: Project page: https://codegoat24.github.io/LiFT
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.04862
title: EXAONE 3.5: Series of Large Language Models for Real-world Use Cases
authors: ['LG AI Research', 'Soyoung An', 'Kyunghoon Bae', 'Eunbi Choi', 'Kibong Choi', 'Stanley Jungkyu Choi', 'Seokhee Hong', 'Junwon Hwang', 'Hyojin Jeon', 'Gerrard Jeongwon Jo', 'Hyunjik Jo', 'Jiyeon Jung', 'Yountae Jung', 'Hyosang Kim', 'Joonkee Kim', 'Seonghwan Kim', 'Soyeon Kim', 'Sunkyoung Kim', 'Yireun Kim', 'Yongil Ki...
categories: ['cs.CL']
summary: This technical report introduces the EXAONE 3.5 instruction-tuned language models, developed and released by LG AI Research. The EXAONE 3.5 language models are offered in three configurations: 32B, 7.8B, and 2.4B. These models feature several standout capabilities: 1) exceptional instruction following capabilities in r...
published: 2024-12-06T08:53:46Z
comments: arXiv admin note: text overlap with arXiv:2408.03541
journal_ref: null
doi: null
ss_title: EXAONE 3.5: Series of Large Language Models for Real-world Use Cases
ss_authors: ['LG AI Research', 'Soyoung An', 'Kyunghoon Bae', 'Eunbi Choi', 'Kibong Choi', 'Stanley Jungkyu Choi', 'Seokhee Hong', 'Junwon Hwang', 'Hyojin Jeon', 'Gerrard Jeongwon Jo', 'Hyunjik Jo', 'Jiyeon Jung', 'Yountae Jung', 'Hyosang Kim', 'Joonkee Kim', 'Seonghwan Kim', 'Soyeon Kim', 'SunKyoung Kim', 'Yireun Kim', 'Yongil Ki...
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 16
ss_referenceCount: 0
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2412.04871
title: Building a Family of Data Augmentation Models for Low-cost LLM Fine-tuning on the Cloud
authors: ['Yuanhao Yue', 'Chengyu Wang', 'Jun Huang', 'Peng Wang']
categories: ['cs.CL']
summary: Specializing LLMs in various domain-specific tasks has emerged as a critical step towards achieving high performance. However, the construction and annotation of datasets in specific domains are always very costly. Apart from using superior and expensive closed-source LLM APIs to construct datasets, some open-source mo...
published: 2024-12-06T09:04:12Z
comments: coling 2025 industry track
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.04880
title: MozzaVID: Mozzarella Volumetric Image Dataset
authors: ['Pawel Tomasz Pieta', 'Peter Winkel Rasmussen', 'Anders Bjorholm Dahl', 'Jeppe Revall Frisvad', 'Siavash Arjomand Bigdeli', 'Carsten Gundlach', 'Anders Nymark Christensen']
categories: ['cs.CV', 'eess.IV']
summary: Influenced by the complexity of volumetric imaging, there is a shortage of established datasets useful for benchmarking volumetric deep-learning models. As a consequence, new and existing models are not easily comparable, limiting the development of architectures optimized specifically for volumetric data. To counterac...
published: 2024-12-06T09:23:31Z
comments: null
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.04905
title: DEMO: Reframing Dialogue Interaction with Fine-grained Element Modeling
authors: ['Minzheng Wang', 'Xinghua Zhang', 'Kun Chen', 'Nan Xu', 'Haiyang Yu', 'Fei Huang', 'Wenji Mao', 'Yongbin Li']
categories: ['cs.CL', 'cs.AI', 'cs.LG']
summary: Large language models (LLMs) enabled dialogue systems have become one of the central modes in human-machine interaction, which bring about vast amounts of conversation logs and increasing demand for dialogue generation. The dialogue's life-cycle spans from $\textit{Prelude}$ through $\textit{Interlocution}$ to $\textit...
published: 2024-12-06T10:01:38Z
comments: ACL 2025 Findings. We release the code and data at https://github.com/MozerWang/DEMO
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null
arxiv_id: 2412.05237
title: MAmmoTH-VL: Eliciting Multimodal Reasoning with Instruction Tuning at Scale
authors: ['Jarvis Guo', 'Tuney Zheng', 'Yuelin Bai', 'Bo Li', 'Yubo Wang', 'King Zhu', 'Yizhi Li', 'Graham Neubig', 'Wenhu Chen', 'Xiang Yue']
categories: ['cs.CL', 'cs.CV']
summary: Open-source multimodal large language models (MLLMs) have shown significant potential in a broad range of multimodal tasks. However, their reasoning capabilities remain constrained by existing instruction-tuning datasets, which were predominately repurposed from academic datasets such as VQA, AI2D, and ChartQA. These d...
published: 2024-12-06T18:14:24Z
comments: ACL 2025 Main
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.05270
title: APOLLO: SGD-like Memory, AdamW-level Performance
authors: ['Hanqing Zhu', 'Zhenyu Zhang', 'Wenyan Cong', 'Xi Liu', 'Sem Park', 'Vikas Chandra', 'Bo Long', 'David Z. Pan', 'Zhangyang Wang', 'Jinwon Lee']
categories: ['cs.LG', 'cs.AI', 'cs.PF']
summary: Large language models (LLMs) are notoriously memory-intensive during training, particularly with the popular AdamW optimizer. This memory burden necessitates using more or higher-end GPUs or reducing batch sizes, limiting training scalability and throughput. To address this, various memory-efficient optimizers have bee...
published: 2024-12-06T18:55:34Z
comments: Accepted to MLSys 2025; the newest version with new experiments
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.05271
title: Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling
authors: ['Zhe Chen', 'Weiyun Wang', 'Yue Cao', 'Yangzhou Liu', 'Zhangwei Gao', 'Erfei Cui', 'Jinguo Zhu', 'Shenglong Ye', 'Hao Tian', 'Zhaoyang Liu', 'Lixin Gu', 'Xuehui Wang', 'Qingyun Li', 'Yimin Ren', 'Zixuan Chen', 'Jiapeng Luo', 'Jiahao Wang', 'Tan Jiang', 'Bo Wang', 'Conghui He', 'Botian Shi', 'Xingcheng Zhang', 'Han Lv'...
categories: ['cs.CV']
summary: We introduce InternVL 2.5, an advanced multimodal large language model (MLLM) series that builds upon InternVL 2.0, maintaining its core model architecture while introducing significant enhancements in training and testing strategies as well as data quality. In this work, we delve into the relationship between model sc...
published: 2024-12-06T18:57:08Z
comments: Technical Report
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.05337
title: ACT-Bench: Towards Action Controllable World Models for Autonomous Driving
authors: ['Hidehisa Arai', 'Keishi Ishihara', 'Tsubasa Takahashi', 'Yu Yamaguchi']
categories: ['cs.CV', 'cs.LG', 'cs.RO']
summary: World models have emerged as promising neural simulators for autonomous driving, with the potential to supplement scarce real-world data and enable closed-loop evaluations. However, current research primarily evaluates these models based on visual realism or downstream task performance, with limited focus on fidelity t...
published: 2024-12-06T01:06:28Z
comments: null
journal_ref: null
doi: null
ss_title: ACT-Bench: Towards Action Controllable World Models for Autonomous Driving
ss_authors: ['Hidehisa Arai', 'Keishi Ishihara', 'Tsubasa Takahashi', 'Yu Yamaguchi']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 3
ss_referenceCount: 0
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2412.05435
title: UniScene: Unified Occupancy-centric Driving Scene Generation
authors: ['Bohan Li', 'Jiazhe Guo', 'Hongsi Liu', 'Yingshuang Zou', 'Yikang Ding', 'Xiwu Chen', 'Hu Zhu', 'Feiyang Tan', 'Chi Zhang', 'Tiancai Wang', 'Shuchang Zhou', 'Li Zhang', 'Xiaojuan Qi', 'Hao Zhao', 'Mu Yang', 'Wenjun Zeng', 'Xin Jin']
categories: ['cs.CV']
summary: Generating high-fidelity, controllable, and annotated training data is critical for autonomous driving. Existing methods typically generate a single data form directly from a coarse scene layout, which not only fails to output rich data forms required for diverse downstream tasks but also struggles to model the direct ...
published: 2024-12-06T21:41:52Z
comments: CVPR 2025
journal_ref: null
doi: null
ss_title: UniScene: Unified Occupancy-centric Driving Scene Generation
ss_authors: ['Bo Li', 'Jiazhe Guo', 'Hongsi Liu', 'Yingshuang Zou', 'Yikang Ding', 'Xiwu Chen', 'Hu Zhu', 'Feiyang Tan', 'Chi Zhang', 'Tiancai Wang', 'Shuchang Zhou', 'Li Zhang', 'Xiaojuan Qi', 'Hao Zhao', 'Mu Yang', 'Wenjun Zeng', 'Xin Jin']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 18
ss_referenceCount: 0
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2412.05479
title: LATTE: Learning to Think with Vision Specialists
authors: ['Zixian Ma', 'Jianguo Zhang', 'Zhiwei Liu', 'Jieyu Zhang', 'Juntao Tan', 'Manli Shu', 'Juan Carlos Niebles', 'Shelby Heinecke', 'Huan Wang', 'Caiming Xiong', 'Ranjay Krishna', 'Silvio Savarese']
categories: ['cs.CV']
summary: While open-source vision-language models perform well on simple question-answering, they still struggle with complex questions that require both perceptual and reasoning capabilities. We propose LATTE, a family of vision-language models that have LeArned to Think wiTh vision spEcialists. By offloading perception to sta...
published: 2024-12-07T00:42:04Z
comments: null
journal_ref: null
doi: null
ss_title: LATTE: Learning to Think with Vision Specialists
ss_authors: ['Zixian Ma', 'Jianguo Zhang', 'Zhiwei Liu', 'Jieyu Zhang', 'Juntao Tan', 'Manli Shu', 'Juan Carlos Niebles', 'Shelby Heinecke', 'Huan Wang', 'Caiming Xiong', 'Ranjay Krishna', 'Silvio Savarese']
ss_year: 2024
ss_venue: null
ss_citationCount: 3
ss_referenceCount: 47
ss_fieldsOfStudy: ['Computer Science']
arxiv_id: 2412.05756
title: Compositional Image Retrieval via Instruction-Aware Contrastive Learning
authors: ['Wenliang Zhong', 'Weizhi An', 'Feng Jiang', 'Hehuan Ma', 'Yuzhi Guo', 'Junzhou Huang']
categories: ['cs.CV']
summary: Composed Image Retrieval (CIR) involves retrieving a target image based on a composed query of an image paired with text that specifies modifications or changes to the visual reference. CIR is inherently an instruction-following task, as the model needs to interpret and apply modifications to the image. In practice, du...
published: 2024-12-07T22:46:52Z
comments: 9 pages, 8 figures
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.05888
title: MCP-MedSAM: A Powerful Lightweight Medical Segment Anything Model Trained with a Single GPU in Just One Day
authors: ['Donghang Lyu', 'Ruochen Gao', 'Marius Staring']
categories: ['cs.CV']
summary: Medical image segmentation involves partitioning medical images into meaningful regions, with a focus on identifying anatomical structures and lesions. It has broad applications in healthcare, and deep learning methods have enabled significant advancements in automating this process. Recently, the introduction of the S...
published: 2024-12-08T10:50:59Z
comments: Accepted for publication at the Journal of Machine Learning for Biomedical Imaging (MELBA)
journal_ref: Machine.Learning.for.Biomedical.Imaging. 3 (2025)
doi: 10.59275/j.melba.2025-4849
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.05983
title: Chimera: Improving Generalist Model with Domain-Specific Experts
authors: ['Tianshuo Peng', 'Mingsheng Li', 'Hongbin Zhou', 'Renqiu Xia', 'Renrui Zhang', 'Lei Bai', 'Song Mao', 'Bin Wang', 'Conghui He', 'Aojun Zhou', 'Botian Shi', 'Tao Chen', 'Bo Zhang', 'Xiangyu Yue']
categories: ['cs.CV']
summary: Recent advancements in Large Multi-modal Models (LMMs) underscore the importance of scaling by increasing image-text paired data, achieving impressive performance on general tasks. Despite their effectiveness in broad applications, generalist models are primarily trained on web-scale datasets dominated by natural image...
published: 2024-12-08T16:10:42Z
comments: Chimera Homepage: https://alpha-innovator.github.io/chimera_page
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.06089
title: GraPE: A Generate-Plan-Edit Framework for Compositional T2I Synthesis
authors: ['Ashish Goswami', 'Satyam Kumar Modi', 'Santhosh Rishi Deshineni', 'Harman Singh', 'Prathosh A. P', 'Parag Singla']
categories: ['cs.CV']
summary: Text-to-image (T2I) generation has seen significant progress with diffusion models, enabling generation of photo-realistic images from text prompts. Despite this progress, existing methods still face challenges in following complex text prompts, especially those requiring compositional and multi-step reasoning. Given s...
published: 2024-12-08T22:29:56Z
comments: null
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.06234
title: Generative Densification: Learning to Densify Gaussians for High-Fidelity Generalizable 3D Reconstruction
authors: ['Seungtae Nam', 'Xiangyu Sun', 'Gyeongjin Kang', 'Younggeun Lee', 'Seungjun Oh', 'Eunbyung Park']
categories: ['cs.CV', 'cs.GR']
summary: Generalized feed-forward Gaussian models have achieved significant progress in sparse-view 3D reconstruction by leveraging prior knowledge from large multi-view datasets. However, these models often struggle to represent high-frequency details due to the limited number of Gaussians. While the densification strategy use...
published: 2024-12-09T06:20:51Z
comments: Project page: https://stnamjef.github.io/GenerativeDensification/
journal_ref: null
doi: null
ss_title: Generative Densification: Learning to Densify Gaussians for High-Fidelity Generalizable 3D Reconstruction
ss_authors: ['Seungtae Nam', 'Xiangyu Sun', 'Gyeongjin Kang', 'Younggeun Lee', 'Seungjun Oh', 'Eunbyung Park']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 0
ss_referenceCount: 47
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2412.06244
title: Unbiased Region-Language Alignment for Open-Vocabulary Dense Prediction
authors: ['Yunheng Li', 'Yuxuan Li', 'Quansheng Zeng', 'Wenhai Wang', 'Qibin Hou', 'Ming-Ming Cheng']
categories: ['cs.CV']
summary: Pre-trained vision-language models (VLMs), such as CLIP, have demonstrated impressive zero-shot recognition capability, but still underperform in dense prediction tasks. Self-distillation recently is emerging as a promising approach for fine-tuning VLMs to better adapt to local regions without requiring extensive annot...
published: 2024-12-09T06:34:23Z
comments: null
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null
arxiv_id: 2412.06272
title: Evaluating LLM-based Approaches to Legal Citation Prediction: Domain-specific Pre-training, Fine-tuning, or RAG? A Benchmark and an Australian Law Case Study
authors: ['Jiuzhou Han', 'Paul Burgess', 'Ehsan Shareghi']
categories: ['cs.CL', 'cs.AI', 'cs.IR']
summary: Large Language Models (LLMs) have demonstrated strong potential across legal tasks, yet the problem of legal citation prediction remains under-explored. At its core, this task demands fine-grained contextual understanding and precise identification of relevant legislation or precedent. We introduce the AusLaw Citation ...
published: 2024-12-09T07:46:14Z
comments: For code, data, and models see https://auslawbench.github.io
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.06322
title: LLaVA-SpaceSGG: Visual Instruct Tuning for Open-vocabulary Scene Graph Generation with Enhanced Spatial Relations
authors: ['Mingjie Xu', 'Mengyang Wu', 'Yuzhi Zhao', 'Jason Chun Lok Li', 'Weifeng Ou']
categories: ['cs.CV']
summary: Scene Graph Generation (SGG) converts visual scenes into structured graph representations, providing deeper scene understanding for complex vision tasks. However, existing SGG models often overlook essential spatial relationships and struggle with generalization in open-vocabulary contexts. To address these limitations...
published: 2024-12-09T09:18:32Z
comments: Accepted by the WACV 2025, including supplementary material
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.06329
title: Normalizing Flows are Capable Generative Models
authors: ['Shuangfei Zhai', 'Ruixiang Zhang', 'Preetum Nakkiran', 'David Berthelot', 'Jiatao Gu', 'Huangjie Zheng', 'Tianrong Chen', 'Miguel Angel Bautista', 'Navdeep Jaitly', 'Josh Susskind']
categories: ['cs.CV', 'cs.LG']
summary: Normalizing Flows (NFs) are likelihood-based models for continuous inputs. They have demonstrated promising results on both density estimation and generative modeling tasks, but have received relatively little attention in recent years. In this work, we demonstrate that NFs are more powerful than previously believed. W...
published: 2024-12-09T09:28:06Z
comments: ICML 2025
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.06410
title: BatchTopK Sparse Autoencoders
authors: ['Bart Bussmann', 'Patrick Leask', 'Neel Nanda']
categories: ['cs.LG', 'cs.AI', 'stat.ML']
summary: Sparse autoencoders (SAEs) have emerged as a powerful tool for interpreting language model activations by decomposing them into sparse, interpretable features. A popular approach is the TopK SAE, that uses a fixed number of the most active latents per sample to reconstruct the model activations. We introduce BatchTopK ...
published: 2024-12-09T11:39:00Z
comments: null
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.06464
title: Gated Delta Networks: Improving Mamba2 with Delta Rule
authors: ['Songlin Yang', 'Jan Kautz', 'Ali Hatamizadeh']
categories: ['cs.CL', 'cs.LG']
summary: Linear Transformers have gained attention as efficient alternatives to standard Transformers, but their performance in retrieval and long-context tasks has been limited. To address these limitations, recent work has explored two distinct mechanisms: gating for adaptive memory control and the delta update rule for preci...
published: 2024-12-09T13:09:04Z
comments: ICLR 2025 camera ready
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.06484
title: Small Languages, Big Models: A Study of Continual Training on Languages of Norway
authors: ['David Samuel', 'Vladislav Mikhailov', 'Erik Velldal', 'Lilja Øvrelid', 'Lucas Georges Gabriel Charpentier', 'Andrey Kutuzov', 'Stephan Oepen']
categories: ['cs.CL']
summary: Training large language models requires vast amounts of data, posing a challenge for less widely spoken languages like Norwegian and even more so for truly low-resource languages like Northern S\'ami. To address this issue, we present a novel three-stage continual training approach that substantially improves the downs...
published: 2024-12-09T13:34:23Z
comments: Published at NoDaLiDa 2025
journal_ref: Proceedings of the 25th Nordic Conference on Computational Linguistics (NoDaLiDa 2025). Tallinn, Estonia
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null
arxiv_id: 2412.06559
title: ProcessBench: Identifying Process Errors in Mathematical Reasoning
authors: ['Chujie Zheng', 'Zhenru Zhang', 'Beichen Zhang', 'Runji Lin', 'Keming Lu', 'Bowen Yu', 'Dayiheng Liu', 'Jingren Zhou', 'Junyang Lin']
categories: ['cs.AI', 'cs.CL', 'cs.LG']
summary: As language models regularly make mistakes when solving math problems, automated identification of errors in the reasoning process becomes increasingly significant for their scalable oversight. In this paper, we introduce ProcessBench for measuring the ability to identify erroneous steps in mathematical reasoning. It c...
published: 2024-12-09T15:11:40Z
comments: ACL 2025
journal_ref: null
doi: null
ss_title: ProcessBench: Identifying Process Errors in Mathematical Reasoning
ss_authors: ['Chujie Zheng', 'Zhenru Zhang', 'Beichen Zhang', 'Runji Lin', 'Keming Lu', 'Bowen Yu', 'Dayiheng Liu', 'Jingren Zhou', 'Junyang Lin']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 77
ss_referenceCount: 29
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2412.06699
title: You See it, You Got it: Learning 3D Creation on Pose-Free Videos at Scale
authors: ['Baorui Ma', 'Huachen Gao', 'Haoge Deng', 'Zhengxiong Luo', 'Tiejun Huang', 'Lulu Tang', 'Xinlong Wang']
categories: ['cs.CV']
summary: Recent 3D generation models typically rely on limited-scale 3D `gold-labels' or 2D diffusion priors for 3D content creation. However, their performance is upper-bounded by constrained 3D priors due to the lack of scalable learning paradigms. In this work, we present See3D, a visual-conditional multi-view diffusion mode...
published: 2024-12-09T17:44:56Z
comments: Accepted by CVPR 2025, Project Page: https://vision.baai.ac.cn/see3d
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.06769
title: Training Large Language Models to Reason in a Continuous Latent Space
authors: ['Shibo Hao', 'Sainbayar Sukhbaatar', 'DiJia Su', 'Xian Li', 'Zhiting Hu', 'Jason Weston', 'Yuandong Tian']
categories: ['cs.CL']
summary: Large language models (LLMs) are restricted to reason in the "language space", where they typically express the reasoning process with a chain-of-thought (CoT) to solve a complex reasoning problem. However, we argue that language space may not always be optimal for reasoning. For example, most word tokens are primarily...
published: 2024-12-09T18:55:56Z
comments: null
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.06781
title: Around the World in 80 Timesteps: A Generative Approach to Global Visual Geolocation
authors: ['Nicolas Dufour', 'David Picard', 'Vicky Kalogeiton', 'Loic Landrieu']
categories: ['cs.CV', 'cs.LG']
summary: Global visual geolocation predicts where an image was captured on Earth. Since images vary in how precisely they can be localized, this task inherently involves a significant degree of ambiguity. However, existing approaches are deterministic and overlook this aspect. In this paper, we aim to close the gap between trad...
published: 2024-12-09T18:59:04Z
comments: Project page: https://nicolas-dufour.github.io/plonk
journal_ref: null
doi: null
ss_title: Around the World in 80 Timesteps: A Generative Approach to Global Visual Geolocation
ss_authors: ['Nicolas Dufour', 'David Picard', 'Vicky Kalogeiton', 'Loic Landrieu']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 2
ss_referenceCount: 0
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2412.06782
title: CARP: Visuomotor Policy Learning via Coarse-to-Fine Autoregressive Prediction
authors: ['Zhefei Gong', 'Pengxiang Ding', 'Shangke Lyu', 'Siteng Huang', 'Mingyang Sun', 'Wei Zhao', 'Zhaoxin Fan', 'Donglin Wang']
categories: ['cs.RO', 'cs.CV']
summary: In robotic visuomotor policy learning, diffusion-based models have achieved significant success in improving the accuracy of action trajectory generation compared to traditional autoregressive models. However, they suffer from inefficiency due to multiple denoising steps and limited flexibility from complex constraints...
published: 2024-12-09T18:59:18Z
comments: null
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.06787
title: [MASK] is All You Need
authors: ['Vincent Tao Hu', 'Björn Ommer']
categories: ['cs.CV', 'cs.AI']
summary: In generative models, two paradigms have gained attraction in various applications: next-set prediction-based Masked Generative Models and next-noise prediction-based Non-Autoregressive Models, e.g., Diffusion Models. In this work, we propose using discrete-state models to connect them and explore their scalability in ...
published: 2024-12-09T18:59:56Z
comments: Technical Report (WIP), Project Page(code, model, dataset): https://compvis.github.io/mask/
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null
2,412.06845
7B Fully Open Source Moxin-LLM/VLM -- From Pretraining to GRPO-based Reinforcement Learning Enhancement
['Pu Zhao', 'Xuan Shen', 'Zhenglun Kong', 'Yixin Shen', 'Sung-En Chang', 'Arash Akbari', 'Timothy Rupprecht', 'Lei Lu', 'Enfu Nan', 'Changdi Yang', 'Yumei He', 'Weiyan Shi', 'Xingchen Xu', 'Yu Huang', 'Wei Jiang', 'Wei Wang', 'Yue Chen', 'Yong He', 'Yanzhi Wang']
['cs.CL', 'cs.AI', 'cs.LG']
Recently, Large Language Models (LLMs) have undergone a significant transformation, marked by a rapid rise in both their popularity and capabilities. Leading this evolution are proprietary LLMs like GPT-4 and GPT-o1, which have captured widespread attention in the AI community due to their remarkable performance and ve...
2024-12-08T02:01:46Z
null
null
null
7B Fully Open Source Moxin-LLM/VLM -- From Pretraining to GRPO-based Reinforcement Learning Enhancement
['Pu Zhao', 'Xuan Shen', 'Zhenglun Kong', 'Yixin Shen', 'Sung-En Chang', 'Timothy Rupprecht', 'Lei Lu', 'Enfu Nan', 'Changdi Yang', 'Yumei He', 'Xingchen Xu', 'Yu Huang', 'Wei Wang', 'Yue Chen', 'Yongchun He', 'Yanzhi Wang']
2024
null
1
130
['Computer Science']
2412.06974
MV-DUSt3R+: Single-Stage Scene Reconstruction from Sparse Views In 2 Seconds
['Zhenggang Tang', 'Yuchen Fan', 'Dilin Wang', 'Hongyu Xu', 'Rakesh Ranjan', 'Alexander Schwing', 'Zhicheng Yan']
['cs.CV', 'cs.AI']
Recent sparse multi-view scene reconstruction advances like DUSt3R and MASt3R no longer require camera calibration and camera pose estimation. However, they only process a pair of views at a time to infer pixel-aligned pointmaps. When dealing with more than two views, a combinatorial number of error prone pairwise reco...
2024-12-09T20:34:55Z
null
null
null
MV-DUSt3R+: Single-Stage Scene Reconstruction from Sparse Views In 2 Seconds
['Zhenggang Tang', 'Yuchen Fan', 'Dilin Wang', 'Hongyu Xu', 'Rakesh Ranjan', 'Alexander G. Schwing', 'Zhicheng Yan']
2024
arXiv.org
18
0
['Computer Science']
2412.06993
Toward AI-Driven Digital Organism: Multiscale Foundation Models for Predicting, Simulating and Programming Biology at All Levels
['Le Song', 'Eran Segal', 'Eric Xing']
['cs.AI', 'cs.LG', 'q-bio.QM']
We present an approach of using AI to model and simulate biology and life. Why is it important? Because at the core of medicine, pharmacy, public health, longevity, agriculture and food security, environmental protection, and clean energy, it is biology at work. Biology in the physical world is too complex to manipulat...
2024-12-09T20:59:59Z
null
null
null
null
null
null
null
null
null
null
2412.07112
Maya: An Instruction Finetuned Multilingual Multimodal Model
['Nahid Alam', 'Karthik Reddy Kanjula', 'Surya Guthikonda', 'Timothy Chung', 'Bala Krishna S Vegesna', 'Abhipsha Das', 'Anthony Susevski', 'Ryan Sze-Yin Chan', 'S M Iftekhar Uddin', 'Shayekh Bin Islam', 'Roshan Santhosh', 'Snegha A', 'Drishti Sharma', 'Chen Liu', 'Isha Chaturvedi', 'Genta Indra Winata', 'Ashvanth. S', ...
['cs.CV', 'cs.CL']
The rapid development of large Vision-Language Models (VLMs) has led to impressive results on academic benchmarks, primarily in widely spoken languages. However, significant gaps remain in the ability of current VLMs to handle low-resource languages and varied cultural contexts, largely due to a lack of high-quality, d...
2024-12-10T01:57:17Z
null
null
null
null
null
null
null
null
null
null
2412.07338
Contextualized Counterspeech: Strategies for Adaptation, Personalization, and Evaluation
['Lorenzo Cima', 'Alessio Miaschi', 'Amaury Trujillo', 'Marco Avvenuti', "Felice Dell'Orletta", 'Stefano Cresci']
['cs.HC', 'cs.AI', 'cs.SI']
AI-generated counterspeech offers a promising and scalable strategy to curb online toxicity through direct replies that promote civil discourse. However, current counterspeech is one-size-fits-all, lacking adaptation to the moderation context and the users involved. We propose and evaluate multiple strategies for gener...
2024-12-10T09:29:52Z
Article published in WebConf 25, 34th ACM Web Conference. Please, cite the published version
WebConf 2025, 34th ACM Web Conference
10.1145/3696410.3714507
null
null
null
null
null
null
null
2412.07360
Efficient 3D Recognition with Event-driven Spike Sparse Convolution
['Xuerui Qiu', 'Man Yao', 'Jieyuan Zhang', 'Yuhong Chou', 'Ning Qiao', 'Shibo Zhou', 'Bo Xu', 'Guoqi Li']
['cs.CV']
Spiking Neural Networks (SNNs) provide an energy-efficient way to extract 3D spatio-temporal features. Point clouds are sparse 3D spatial data, which suggests that SNNs should be well-suited for processing them. However, when applying SNNs to point clouds, they often exhibit limited performance and fewer application sc...
2024-12-10T09:55:15Z
Accepted by AAAI 2025
null
null
null
null
null
null
null
null
null
2412.07371
PRM: Photometric Stereo based Large Reconstruction Model
['Wenhang Ge', 'Jiantao Lin', 'Guibao Shen', 'Jiawei Feng', 'Tao Hu', 'Xinli Xu', 'Ying-Cong Chen']
['cs.CV', 'cs.GR']
We propose PRM, a novel photometric stereo based large reconstruction model to reconstruct high-quality meshes with fine-grained local details. Unlike previous large reconstruction models that prepare images under fixed and simple lighting as both input and supervision, PRM renders photometric stereo images by varying ...
2024-12-10T10:11:15Z
https://wenhangge.github.io/PRM/
null
null
PRM: Photometric Stereo based Large Reconstruction Model
['Wenhang Ge', 'Jiantao Lin', 'Guibao Shen', 'Jiawei Feng', 'Tao Hu', 'Xinli Xu', 'Ying-Cong Chen']
2024
arXiv.org
2
51
['Computer Science']
2412.07589
DiffSensei: Bridging Multi-Modal LLMs and Diffusion Models for Customized Manga Generation
['Jianzong Wu', 'Chao Tang', 'Jingbo Wang', 'Yanhong Zeng', 'Xiangtai Li', 'Yunhai Tong']
['cs.CV']
Story visualization, the task of creating visual narratives from textual descriptions, has seen progress with text-to-image generation models. However, these models often lack effective control over character appearances and interactions, particularly in multi-character scenes. To address these limitations, we propose ...
2024-12-10T15:24:12Z
[CVPR 2025] The project page is https://jianzongwu.github.io/projects/diffsensei/
null
null
DiffSensei: Bridging Multi-Modal LLMs and Diffusion Models for Customized Manga Generation
['Jianzong Wu', 'Chao Tang', 'Jingbo Wang', 'Yanhong Zeng', 'Xiangtai Li', 'Yunhai Tong']
2024
arXiv.org
5
52
['Computer Science']
2412.07633
ChocoLlama: Lessons Learned From Teaching Llamas Dutch
['Matthieu Meeus', 'Anthony Rathé', 'François Remy', 'Pieter Delobelle', 'Jens-Joris Decorte', 'Thomas Demeester']
['cs.CL']
While Large Language Models (LLMs) have shown remarkable capabilities in natural language understanding and generation, their performance often lags in lower-resource, non-English languages due to biases in the training data. In this work, we explore strategies for adapting the primarily English LLMs (Llama-2 and Llama...
2024-12-10T16:13:58Z
null
null
null
null
null
null
null
null
null
null
2412.07679
RADIOv2.5: Improved Baselines for Agglomerative Vision Foundation Models
['Greg Heinrich', 'Mike Ranzinger', 'Hongxu Yin', 'Yao Lu', 'Jan Kautz', 'Andrew Tao', 'Bryan Catanzaro', 'Pavlo Molchanov']
['cs.CV', 'cs.AI']
Agglomerative models have recently emerged as a powerful approach to training vision foundation models, leveraging multi-teacher distillation from existing models such as CLIP, DINO, and SAM. This strategy enables the efficient creation of robust models, combining the strengths of individual teachers while significantl...
2024-12-10T17:06:41Z
null
null
null
RADIOv2.5: Improved Baselines for Agglomerative Vision Foundation Models
['Greg Heinrich', 'Michael Ranzinger', 'Hongxu Yin', 'Yao Lu', 'Jan Kautz', 'Andrew Tao', 'Bryan Catanzaro', 'Pavlo Molchanov']
2024
null
4
0
['Computer Science']
2412.07689
DriveMM: All-in-One Large Multimodal Model for Autonomous Driving
['Zhijian Huang', 'Chengjian Feng', 'Feng Yan', 'Baihui Xiao', 'Zequn Jie', 'Yujie Zhong', 'Xiaodan Liang', 'Lin Ma']
['cs.CV', 'cs.MM', 'cs.RO']
Large Multimodal Models (LMMs) have demonstrated exceptional comprehension and interpretation capabilities in Autonomous Driving (AD) by incorporating large language models. Despite the advancements, current data-driven AD approaches tend to concentrate on a single dataset and specific tasks, neglecting their overall c...
2024-12-10T17:27:32Z
null
null
null
null
null
null
null
null
null
null
2412.07724
Granite Guardian
['Inkit Padhi', 'Manish Nagireddy', 'Giandomenico Cornacchia', 'Subhajit Chaudhury', 'Tejaswini Pedapati', 'Pierre Dognin', 'Keerthiram Murugesan', 'Erik Miehling', 'Martín Santillán Cooper', 'Kieran Fraser', 'Giulio Zizzo', 'Muhammad Zaid Hameed', 'Mark Purcell', 'Michael Desmond', 'Qian Pan', 'Zahra Ashktorab', 'Inge...
['cs.CL']
We introduce the Granite Guardian models, a suite of safeguards designed to provide risk detection for prompts and responses, enabling safe and responsible use in combination with any large language model (LLM). These models offer comprehensive coverage across multiple risk dimensions, including social bias, profanity,...
2024-12-10T18:17:02Z
null
null
null
Granite Guardian
['Inkit Padhi', 'Manish Nagireddy', 'Giandomenico Cornacchia', 'Subhajit Chaudhury', 'Tejaswini Pedapati', 'Pierre L. Dognin', 'K. Murugesan', 'Erik Miehling', 'Martín Santillán Cooper', 'Kieran Fraser', 'Giulio Zizzo', 'Muhammad Zaid Hameed', 'Mark Purcell', 'Michael Desmond', 'Qian Pan', 'Inge Vejsbjerg', 'Elizabeth ...
2024
arXiv.org
6
0
['Computer Science']
2412.07755
SAT: Dynamic Spatial Aptitude Training for Multimodal Language Models
['Arijit Ray', 'Jiafei Duan', 'Ellis Brown', 'Reuben Tan', 'Dina Bashkirova', 'Rose Hendrix', 'Kiana Ehsani', 'Aniruddha Kembhavi', 'Bryan A. Plummer', 'Ranjay Krishna', 'Kuo-Hao Zeng', 'Kate Saenko']
['cs.CV', 'cs.AI', 'cs.GR', 'cs.RO']
Reasoning about motion and space is a fundamental cognitive capability that is required by multiple real-world applications. While many studies highlight that large multimodal language models (MLMs) struggle to reason about space, they only focus on static spatial relationships, and not dynamic awareness of motion and ...
2024-12-10T18:52:45Z
Project webpage: https://arijitray.com/SAT/
null
null
null
null
null
null
null
null
null
2412.07761
Repurposing Pre-trained Video Diffusion Models for Event-based Video Interpolation
['Jingxi Chen', 'Brandon Y. Feng', 'Haoming Cai', 'Tianfu Wang', 'Levi Burner', 'Dehao Yuan', 'Cornelia Fermuller', 'Christopher A. Metzler', 'Yiannis Aloimonos']
['cs.CV']
Video Frame Interpolation aims to recover realistic missing frames between observed frames, generating a high-frame-rate video from a low-frame-rate video. However, without additional guidance, the large motion between frames makes this problem ill-posed. Event-based Video Frame Interpolation (EVFI) addresses this chal...
2024-12-10T18:55:30Z
Accepted to CVPR 2025
null
null
Repurposing Pre-trained Video Diffusion Models for Event-based Video Interpolation
['Jingxi Chen', 'Brandon Y. Feng', 'Haoming Cai', 'Tianfu Wang', 'Levi Burner', 'Dehao Yuan', 'C. Fermüller', 'Christopher A. Metzler', 'Y. Aloimonos']
2024
arXiv.org
3
54
['Computer Science']
2412.07767
Learning Visual Generative Priors without Text
['Shuailei Ma', 'Kecheng Zheng', 'Ying Wei', 'Wei Wu', 'Fan Lu', 'Yifei Zhang', 'Chen-Wei Xie', 'Biao Gong', 'Jiapeng Zhu', 'Yujun Shen']
['cs.CV']
Although text-to-image (T2I) models have recently thrived as visual generative priors, their reliance on high-quality text-image pairs makes scaling up expensive. We argue that grasping the cross-modality alignment is not a necessity for a sound visual generative prior, whose focus should be on texture modeling. Such a...
2024-12-10T18:59:31Z
Project Page: https://ant-research.github.io/lumos
null
null
Learning Visual Generative Priors without Text
['Shuailei Ma', 'Kecheng Zheng', 'Ying Wei', 'Wei Wu', 'Fan Lu', 'Yifei Zhang', 'Chen-Wei Xie', 'Biao Gong', 'Jiapeng Zhu', 'Yujun Shen']
2024
Computer Vision and Pattern Recognition
3
56
['Computer Science']
2412.07769
BiMediX2: Bio-Medical EXpert LMM for Diverse Medical Modalities
['Sahal Shaji Mullappilly', 'Mohammed Irfan Kurpath', 'Sara Pieri', 'Saeed Yahya Alseiari', 'Shanavas Cholakkal', 'Khaled Aldahmani', 'Fahad Khan', 'Rao Anwer', 'Salman Khan', 'Timothy Baldwin', 'Hisham Cholakkal']
['cs.CV']
This paper introduces BiMediX2, a bilingual (Arabic-English) Bio-Medical EXpert Large Multimodal Model (LMM) with a unified architecture that integrates text and visual modalities, enabling advanced image understanding and medical applications. BiMediX2 leverages the Llama3.1 architecture and integrates text and visual...
2024-12-10T18:59:35Z
null
null
null
BiMediX2: Bio-Medical EXpert LMM for Diverse Medical Modalities
['Sahal Shaji Mullappilly', 'Mohammed Irfan Kurpath', 'Sara Pieri', 'Saeed Yahya Alseiari', 'Shanavas Cholakkal', 'Khaled Aldahmani', 'F. Khan', 'R. Anwer', 'Salman H. Khan', 'Timothy Baldwin', 'Hisham Cholakkal']
2024
arXiv.org
3
0
['Computer Science']
2412.07771
PETALface: Parameter Efficient Transfer Learning for Low-resolution Face Recognition
['Kartik Narayan', 'Nithin Gopalakrishnan Nair', 'Jennifer Xu', 'Rama Chellappa', 'Vishal M. Patel']
['cs.CV']
Pre-training on large-scale datasets and utilizing margin-based loss functions have been highly successful in training models for high-resolution face recognition. However, these models struggle with low-resolution face datasets, in which the faces lack the facial attributes necessary for distinguishing different faces...
2024-12-10T18:59:45Z
Accepted to WACV 2025. Project Page: https://kartik-3004.github.io/PETALface/
null
null
null
null
null
null
null
null
null
2412.07772
From Slow Bidirectional to Fast Autoregressive Video Diffusion Models
['Tianwei Yin', 'Qiang Zhang', 'Richard Zhang', 'William T. Freeman', 'Fredo Durand', 'Eli Shechtman', 'Xun Huang']
['cs.CV']
Current video diffusion models achieve impressive generation quality but struggle in interactive applications due to bidirectional attention dependencies. The generation of a single frame requires the model to process the entire sequence, including the future. We address this limitation by adapting a pretrained bidirec...
2024-12-10T18:59:50Z
Project Page: https://causvid.github.io/
null
null
null
null
null
null
null
null
null
2412.07992
Concept Bottleneck Large Language Models
['Chung-En Sun', 'Tuomas Oikarinen', 'Berk Ustun', 'Tsui-Wei Weng']
['cs.CL', 'cs.LG']
We introduce Concept Bottleneck Large Language Models (CB-LLMs), a novel framework for building inherently interpretable Large Language Models (LLMs). In contrast to traditional black-box LLMs that rely on limited post-hoc interpretations, CB-LLMs integrate intrinsic interpretability directly into the LLMs -- allowing ...
2024-12-11T00:04:10Z
Accepted to ICLR 2025. arXiv admin note: substantial text overlap with arXiv:2407.04307
null
null
null
null
null
null
null
null
null
2412.08347
SmolTulu: Higher Learning Rate to Batch Size Ratios Can Lead to Better Reasoning in SLMs
['Sultan Alrashed']
['cs.CL', 'cs.AI']
We present SmolTulu-1.7b-Instruct, referenced in this report as SmolTulu-DPO-1130, an instruction-tuned language model that adapts AllenAI's Tulu 3 post-training pipeline to enhance Huggingface's SmolLM2-1.7B base model. Through comprehensive empirical analysis using a 135M parameter model, we demonstrate that the rela...
2024-12-11T12:41:36Z
10 pages, 4 figures, and 13 tables. For the SmolTulu-1.7b-instruct model, see: https://huggingface.co/SultanR/SmolTulu-1.7b-Instruct
null
null
SmolTulu: Higher Learning Rate to Batch Size Ratios Can Lead to Better Reasoning in SLMs
['Sultan Alrashed']
2024
arXiv.org
2
0
['Computer Science']
2412.08376
Reloc3r: Large-Scale Training of Relative Camera Pose Regression for Generalizable, Fast, and Accurate Visual Localization
['Siyan Dong', 'Shuzhe Wang', 'Shaohui Liu', 'Lulu Cai', 'Qingnan Fan', 'Juho Kannala', 'Yanchao Yang']
['cs.CV']
Visual localization aims to determine the camera pose of a query image relative to a database of posed images. In recent years, deep neural networks that directly regress camera poses have gained popularity due to their fast inference capabilities. However, existing methods struggle to either generalize well to new sce...
2024-12-11T13:36:18Z
CVPR 2025
null
null
Reloc3r: Large-Scale Training of Relative Camera Pose Regression for Generalizable, Fast, and Accurate Visual Localization
['Siyan Dong', 'Shuzhe Wang', 'Shaohui Liu', 'Lulu Cai', 'Qingnan Fan', 'Juho Kannala', 'Yanchao Yang']
2024
arXiv.org
6
135
['Computer Science']
2412.08443
POINTS1.5: Building a Vision-Language Model towards Real World Applications
['Yuan Liu', 'Le Tian', 'Xiao Zhou', 'Xinyu Gao', 'Kavio Yu', 'Yang Yu', 'Jie Zhou']
['cs.CV', 'cs.MM']
Vision-language models have made significant strides recently, demonstrating superior performance across a range of tasks, e.g. optical character recognition and complex diagram analysis. Building on this trend, we introduce a new vision-language model, POINTS1.5, designed to excel in various real-world applications. P...
2024-12-11T15:08:25Z
null
null
null
POINTS1.5: Building a Vision-Language Model towards Real World Applications
['Yuan Liu', 'Le Tian', 'Xiao Zhou', 'Xinyu Gao', 'Kavio Yu', 'Yang Yu', 'Jie Zhou']
2024
arXiv.org
4
0
['Computer Science']
2412.08486
Learning Flow Fields in Attention for Controllable Person Image Generation
['Zijian Zhou', 'Shikun Liu', 'Xiao Han', 'Haozhe Liu', 'Kam Woh Ng', 'Tian Xie', 'Yuren Cong', 'Hang Li', 'Mengmeng Xu', 'Juan-Manuel Pérez-Rúa', 'Aditya Patel', 'Tao Xiang', 'Miaojing Shi', 'Sen He']
['cs.CV']
Controllable person image generation aims to generate a person image conditioned on reference images, allowing precise control over the person's appearance or pose. However, prior methods often distort fine-grained textural details from the reference image, despite achieving high overall image quality. We attribute the...
2024-12-11T15:51:14Z
github: https://github.com/franciszzj/Leffa, demo: https://huggingface.co/spaces/franciszzj/Leffa, model: https://huggingface.co/franciszzj/Leffa
null
null
Learning Flow Fields in Attention for Controllable Person Image Generation
['Zijian Zhou', 'Shikun Liu', 'Xiao Han', 'Haozhe Liu', 'KamWoh Ng', 'Tian Xie', 'Yuren Cong', 'Hang Li', 'Mengmeng Xu', 'Juan-Manuel Pérez-Rúa', 'Aditya Patel', 'Tao Xiang', 'Miaojing Shi', 'Sen He']
2024
arXiv.org
2
0
['Computer Science']
2412.08573
TryOffAnyone: Tiled Cloth Generation from a Dressed Person
['Ioannis Xarchakos', 'Theodoros Koukopoulos']
['cs.CV']
The fashion industry is increasingly leveraging computer vision and deep learning technologies to enhance online shopping experiences and operational efficiencies. In this paper, we address the challenge of generating high-fidelity tiled garment images essential for personalized recommendations, outfit composition, and...
2024-12-11T17:41:53Z
null
null
null
null
null
null
null
null
null
null
2412.08591
RoomTour3D: Geometry-Aware Video-Instruction Tuning for Embodied Navigation
['Mingfei Han', 'Liang Ma', 'Kamila Zhumakhanova', 'Ekaterina Radionova', 'Jingyi Zhang', 'Xiaojun Chang', 'Xiaodan Liang', 'Ivan Laptev']
['cs.CV', 'cs.AI', 'cs.RO']
Vision-and-Language Navigation (VLN) suffers from the limited diversity and scale of training data, primarily constrained by the manual curation of existing simulators. To address this, we introduce RoomTour3D, a video-instruction dataset derived from web-based room tour videos that capture real-world indoor spaces and...
2024-12-11T18:10:21Z
CVPR2025
null
null
null
null
null
null
null
null
null
2412.08637
DMin: Scalable Training Data Influence Estimation for Diffusion Models
['Huawei Lin', 'Yingjie Lao', 'Weijie Zhao']
['cs.CV', 'cs.AI', 'cs.LG']
Identifying the training data samples that most influence a generated image is a critical task in understanding diffusion models (DMs), yet existing influence estimation methods are constrained to small-scale or LoRA-tuned models due to computational limitations. To address this challenge, we propose DMin (Diffusion Mo...
2024-12-11T18:58:40Z
14 pages, 6 figures, 8 tables. Under Review
null
null
DMin: Scalable Training Data Influence Estimation for Diffusion Models
['Huawei Lin', 'Yingjie Lao', 'Weijie Zhao']
2024
arXiv.org
3
39
['Computer Science']
2412.08647
SegFace: Face Segmentation of Long-Tail Classes
['Kartik Narayan', 'Vibashan VS', 'Vishal M. Patel']
['cs.CV']
Face parsing refers to the semantic segmentation of human faces into key facial regions such as eyes, nose, hair, etc. It serves as a prerequisite for various advanced applications, including face editing, face swapping, and facial makeup, which often require segmentation masks for classes like eyeglasses, hats, earrin...
2024-12-11T18:59:57Z
Accepted to AAAI 2025. Project Page: https://kartik-3004.github.io/SegFace/
null
null
null
null
null
null
null
null
null
2412.08686
LatentQA: Teaching LLMs to Decode Activations Into Natural Language
['Alexander Pan', 'Lijie Chen', 'Jacob Steinhardt']
['cs.CL', 'cs.CY', 'cs.LG']
Interpretability methods seek to understand language model representations, yet the outputs of most such methods -- circuits, vectors, scalars -- are not immediately human-interpretable. In response, we introduce LatentQA, the task of answering open-ended questions about model activations in natural language. Towards s...
2024-12-11T18:59:33Z
Project page is at https://latentqa.github.io
null
null
null
null
null
null
null
null
null
2412.08687
VisionArena: 230K Real World User-VLM Conversations with Preference Labels
['Christopher Chou', 'Lisa Dunlap', 'Koki Mashita', 'Krishna Mandal', 'Trevor Darrell', 'Ion Stoica', 'Joseph E. Gonzalez', 'Wei-Lin Chiang']
['cs.CV']
With the growing adoption and capabilities of vision-language models (VLMs) comes the need for benchmarks that capture authentic user-VLM interactions. In response, we create VisionArena, a dataset of 230K real-world conversations between users and VLMs. Collected from Chatbot Arena - an open-source platform where user...
2024-12-11T18:59:46Z
updated for CVPR Camera Ready
null
null
null
null
null
null
null
null
null
2412.08737
Euclid: Supercharging Multimodal LLMs with Synthetic High-Fidelity Visual Descriptions
['Jiarui Zhang', 'Ollie Liu', 'Tianyu Yu', 'Jinyi Hu', 'Willie Neiswanger']
['cs.CV', 'cs.AI', 'cs.CL']
Multimodal large language models (MLLMs) have made rapid progress in recent years, yet continue to struggle with low-level visual perception (LLVP) -- particularly the ability to accurately describe the geometric details of an image. This capability is crucial for applications in areas such as robotics, medical image a...
2024-12-11T19:12:13Z
33 pages, 22 figures, 5 tables, 7 algorithms
null
null
Euclid: Supercharging Multimodal LLMs with Synthetic High-Fidelity Visual Descriptions
['Jiarui Zhang', 'Ollie Liu', 'Tianyu Yu', 'Jinyi Hu', 'W. Neiswanger']
2024
arXiv.org
4
0
['Computer Science']
2412.08746
DocVLM: Make Your VLM an Efficient Reader
['Mor Shpigel Nacson', 'Aviad Aberdam', 'Roy Ganz', 'Elad Ben Avraham', 'Alona Golts', 'Yair Kittenplon', 'Shai Mazor', 'Ron Litman']
['cs.CV', 'cs.LG']
Vision-Language Models (VLMs) excel in diverse visual tasks but face challenges in document understanding, which requires fine-grained text processing. While typical visual tasks perform well with low-resolution inputs, reading-intensive applications demand high-resolution, resulting in significant computational overhe...
2024-12-11T19:35:06Z
null
null
null
null
null
null
null
null
null
null
2412.08774
ProtoOcc: Accurate, Efficient 3D Occupancy Prediction Using Dual Branch Encoder-Prototype Query Decoder
['Jungho Kim', 'Changwon Kang', 'Dongyoung Lee', 'Sehwan Choi', 'Jun Won Choi']
['cs.CV']
In this paper, we introduce ProtoOcc, a novel 3D occupancy prediction model designed to predict the occupancy states and semantic classes of 3D voxels through a deep semantic understanding of scenes. ProtoOcc consists of two main components: the Dual Branch Encoder (DBE) and the Prototype Query Decoder (PQD). The DBE p...
2024-12-11T20:55:21Z
Accepted to AAAI Conference on Artificial Intelligence 2025, 15 pages, 9 figures
null
null
null
null
null
null
null
null
null
2412.08781
GMem: A Modular Approach for Ultra-Efficient Generative Models
['Yi Tang', 'Peng Sun', 'Zhenglin Cheng', 'Tao Lin']
['cs.CV', 'cs.LG']
Recent studies indicate that the denoising process in deep generative diffusion models implicitly learns and memorizes semantic information from the data distribution. These findings suggest that capturing more complex data distributions requires larger neural networks, leading to a substantial increase in computationa...
2024-12-11T21:23:24Z
9 pages, 5 figures, 3 tables
null
null
null
null
null
null
null
null
null
2412.08802
jina-clip-v2: Multilingual Multimodal Embeddings for Text and Images
['Andreas Koukounas', 'Georgios Mastrapas', 'Sedigheh Eslami', 'Bo Wang', 'Mohammad Kalim Akram', 'Michael Günther', 'Isabelle Mohr', 'Saba Sturua', 'Nan Wang', 'Han Xiao']
['cs.CL', 'cs.CV', 'cs.IR', '68T50', 'I.2.7; I.2.10']
Contrastive Language-Image Pretraining (CLIP) has been widely used for crossmodal information retrieval and multimodal understanding tasks. However, CLIP models are mainly optimized for crossmodal vision-language tasks and underperform in single-mode text tasks. Moreover, these models are often trained on English datas...
2024-12-11T22:28:12Z
30 pages, 1-10 main paper, 10-12 refs, 12-30 benchmarks
null
null
jina-clip-v2: Multilingual Multimodal Embeddings for Text and Images
['Andreas Koukounas', 'Georgios Mastrapas', 'Bo Wang', 'Mohammad Kalim Akram', 'Sedigheh Eslami', 'Michael Gunther', 'Isabelle Mohr', 'Saba Sturua', 'Scott Martens', 'Nan Wang', 'Han Xiao']
2024
arXiv.org
10
48
['Computer Science']
2412.08864
A Graph-Based Synthetic Data Pipeline for Scaling High-Quality Reasoning Instructions
['Jiankang Wang', 'Jianjun Xu', 'Xiaorui Wang', 'Yuxin Wang', 'Mengting Xing', 'Shancheng Fang', 'Zhineng Chen', 'Hongtao Xie', 'Yongdong Zhang']
['cs.CL']
Synthesizing high-quality reasoning data for continual training has been proven to be effective in enhancing the performance of Large Language Models (LLMs). However, previous synthetic approaches struggle to easily scale up data and incur high costs in the pursuit of high quality. In this paper, we propose the Graph-b...
2024-12-12T01:52:25Z
null
null
null
A Graph-Based Synthetic Data Pipeline for Scaling High-Quality Reasoning Instructions
['Jiankang Wang', 'Jianjun Xu', 'Xiaorui Wang', 'Yuxin Wang', 'Mengting Xing', 'Shancheng Fang', 'Zhineng Chen', 'Hongtao Xie', 'Yongdong Zhang']
2024
arXiv.org
1
46
['Computer Science']
2412.08905
Phi-4 Technical Report
['Marah Abdin', 'Jyoti Aneja', 'Harkirat Behl', 'Sébastien Bubeck', 'Ronen Eldan', 'Suriya Gunasekar', 'Michael Harrison', 'Russell J. Hewett', 'Mojan Javaheripi', 'Piero Kauffmann', 'James R. Lee', 'Yin Tat Lee', 'Yuanzhi Li', 'Weishung Liu', 'Caio C. T. Mendes', 'Anh Nguyen', 'Eric Price', 'Gustavo de Rosa', 'Olli Sa...
['cs.CL', 'cs.AI']
We present phi-4, a 14-billion parameter language model developed with a training recipe that is centrally focused on data quality. Unlike most language models, where pre-training is based primarily on organic data sources such as web content or code, phi-4 strategically incorporates synthetic data throughout the train...
2024-12-12T03:37:41Z
null
null
null
null
null
null
null
null
null
null
2412.09013
Arbitrary-steps Image Super-resolution via Diffusion Inversion
['Zongsheng Yue', 'Kang Liao', 'Chen Change Loy']
['cs.CV', 'NA', 'I.4.3']
This study presents a new image super-resolution (SR) technique based on diffusion inversion, aiming at harnessing the rich image priors encapsulated in large pre-trained diffusion models to improve SR performance. We design a Partial noise Prediction strategy to construct an intermediate state of the diffusion model, ...
2024-12-12T07:24:13Z
Accepted by CVPR 2025. Project: https://github.com/zsyOAOA/InvSR
null
null
null
null
null
null
null
null
null
2412.09025
Shiksha: A Technical Domain focused Translation Dataset and Model for Indian Languages
['Advait Joglekar', 'Srinivasan Umesh']
['cs.CL', 'cs.AI']
Neural Machine Translation (NMT) models are typically trained on datasets with limited exposure to Scientific, Technical and Educational domains. Translation models thus, in general, struggle with tasks that involve scientific understanding or technical jargon. Their performance is found to be even worse for low-resour...
2024-12-12T07:40:55Z
null
null
null
null
null
null
null
null
null
null
2412.09262
LatentSync: Taming Audio-Conditioned Latent Diffusion Models for Lip Sync with SyncNet Supervision
['Chunyu Li', 'Chao Zhang', 'Weikai Xu', 'Jingyu Lin', 'Jinghui Xie', 'Weiguo Feng', 'Bingyue Peng', 'Cunjian Chen', 'Weiwei Xing']
['cs.CV']
End-to-end audio-conditioned latent diffusion models (LDMs) have been widely adopted for audio-driven portrait animation, demonstrating their effectiveness in generating lifelike and high-resolution talking videos. However, direct application of audio-conditioned LDMs to lip-synchronization (lip-sync) tasks results in ...
2024-12-12T13:20:52Z
null
null
null
null
null
null
null
null
null
null
2412.09349
DisPose: Disentangling Pose Guidance for Controllable Human Image Animation
['Hongxiang Li', 'Yaowei Li', 'Yuhang Yang', 'Junjie Cao', 'Zhihong Zhu', 'Xuxin Cheng', 'Long Chen']
['cs.CV']
Controllable human image animation aims to generate videos from reference images using driving videos. Due to the limited control signals provided by sparse guidance (e.g., skeleton pose), recent works have attempted to introduce additional dense conditions (e.g., depth map) to ensure motion alignment. However, such st...
2024-12-12T15:15:59Z
ICLR 2025
null
null
DisPose: Disentangling Pose Guidance for Controllable Human Image Animation
['Hongxiang Li', 'Yaowei Li', 'Yuhang Yang', 'Junjie Cao', 'Zhihong Zhu', 'Xuxin Cheng', 'Long Chen']
2024
International Conference on Learning Representations
12
54
['Computer Science']
2412.09370
Word Sense Linking: Disambiguating Outside the Sandbox
['Andrei Stefan Bejgu', 'Edoardo Barba', 'Luigi Procopio', 'Alberte Fernández-Castro', 'Roberto Navigli']
['cs.CL', 'cs.AI']
Word Sense Disambiguation (WSD) is the task of associating a word in a given context with its most suitable meaning among a set of possible candidates. While the task has recently witnessed renewed interest, with systems achieving performances above the estimated inter-annotator agreement, at the time of writing it sti...
2024-12-12T15:38:34Z
null
Findings of the Association for Computational Linguistics ACL 2024, 2024, 14332-14347
10.18653/v1/2024.findings-acl.851
null
null
null
null
null
null
null
2412.09401
SLAM3R: Real-Time Dense Scene Reconstruction from Monocular RGB Videos
['Yuzheng Liu', 'Siyan Dong', 'Shuzhe Wang', 'Yingda Yin', 'Yanchao Yang', 'Qingnan Fan', 'Baoquan Chen']
['cs.CV']
In this paper, we introduce SLAM3R, a novel and effective system for real-time, high-quality, dense 3D reconstruction using RGB videos. SLAM3R provides an end-to-end solution by seamlessly integrating local 3D reconstruction and global coordinate registration through feed-forward neural networks. Given an input video, ...
2024-12-12T16:08:03Z
CVPR 2025
null
null
SLAM3R: Real-Time Dense Scene Reconstruction from Monocular RGB Videos
['Yuzheng Liu', 'Siyan Dong', 'Shuzhe Wang', 'Yanchao Yang', 'Qingnan Fan', 'Baoquan Chen']
2024
arXiv.org
8
79
['Computer Science']
2412.09405
Learned Compression for Compressed Learning
['Dan Jacobellis', 'Neeraja J. Yadwadkar']
['eess.IV', 'cs.CV', 'cs.LG', 'eess.AS', 'eess.SP']
Modern sensors produce increasingly rich streams of high-resolution data. Due to resource constraints, machine learning systems discard the vast majority of this information via resolution reduction. Compressed-domain learning allows models to operate on compact latent representations, allowing higher effective resolut...
2024-12-12T16:09:57Z
Accepted as paper to 2025 IEEE Data Compression Conference
null
null
null
null
null
null
null
null
null
2412.09413
Imitate, Explore, and Self-Improve: A Reproduction Report on Slow-thinking Reasoning Systems
['Yingqian Min', 'Zhipeng Chen', 'Jinhao Jiang', 'Jie Chen', 'Jia Deng', 'Yiwen Hu', 'Yiru Tang', 'Jiapeng Wang', 'Xiaoxue Cheng', 'Huatong Song', 'Wayne Xin Zhao', 'Zheng Liu', 'Zhongyuan Wang', 'Ji-Rong Wen']
['cs.AI', 'cs.CL']
Recently, slow-thinking reasoning systems, such as o1, have demonstrated remarkable capabilities in solving complex reasoning tasks. These systems typically engage in an extended thinking process before responding to a query, allowing them to generate more thorough, accurate, and well-reasoned solutions. These systems ...
2024-12-12T16:20:36Z
Technical Report on Slow Thinking with LLMs: Part II
null
null
null
null
null
null
null
null
null
2412.09560
Foundational Large Language Models for Materials Research
['Vaibhav Mishra', 'Somaditya Singh', 'Dhruv Ahlawat', 'Mohd Zaki', 'Vaibhav Bihani', 'Hargun Singh Grover', 'Biswajit Mishra', 'Santiago Miret', 'Mausam', 'N. M. Anoop Krishnan']
['cond-mat.mtrl-sci', 'cs.CL', 'cs.IR']
Materials discovery and development are critical for addressing global challenges. Yet, the exponential growth in materials science literature comprising vast amounts of textual data has created significant bottlenecks in knowledge extraction, synthesis, and scientific reasoning. Large Language Models (LLMs) offer unpr...
2024-12-12T18:46:38Z
null
null
null
null
null
null
null
null
null
null
2412.09573
FreeSplatter: Pose-free Gaussian Splatting for Sparse-view 3D Reconstruction
['Jiale Xu', 'Shenghua Gao', 'Ying Shan']
['cs.CV']
Existing sparse-view reconstruction models heavily rely on accurate known camera poses. However, deriving camera extrinsics and intrinsics from sparse-view images presents significant challenges. In this work, we present FreeSplatter, a highly scalable, feed-forward reconstruction framework capable of generating high-q...
2024-12-12T18:52:53Z
Project page: https://bluestyle97.github.io/projects/freesplatter/
null
null
null
null
null
null
null
null
null
2412.09593
Neural LightRig: Unlocking Accurate Object Normal and Material Estimation with Multi-Light Diffusion
['Zexin He', 'Tengfei Wang', 'Xin Huang', 'Xingang Pan', 'Ziwei Liu']
['cs.CV']
Recovering the geometry and materials of objects from a single image is challenging due to its under-constrained nature. In this paper, we present Neural LightRig, a novel framework that boosts intrinsic estimation by leveraging auxiliary multi-lighting conditions from 2D diffusion priors. Specifically, 1) we first lev...
2024-12-12T18:58:09Z
Project page: https://projects.zxhezexin.com/neural-lightrig
null
null
null
null
null
null
null
null
null
2412.09596
InternLM-XComposer2.5-OmniLive: A Comprehensive Multimodal System for Long-term Streaming Video and Audio Interactions
['Pan Zhang', 'Xiaoyi Dong', 'Yuhang Cao', 'Yuhang Zang', 'Rui Qian', 'Xilin Wei', 'Lin Chen', 'Yifei Li', 'Junbo Niu', 'Shuangrui Ding', 'Qipeng Guo', 'Haodong Duan', 'Xin Chen', 'Han Lv', 'Zheng Nie', 'Min Zhang', 'Bin Wang', 'Wenwei Zhang', 'Xinyue Zhang', 'Jiaye Ge', 'Wei Li', 'Jingwen Li', 'Zhongying Tu', 'Conghui...
['cs.CV', 'cs.AI', 'cs.CL']
Creating AI systems that can interact with environments over long periods, similar to human cognition, has been a longstanding research goal. Recent advancements in multimodal large language models (MLLMs) have made significant strides in open-world understanding. However, the challenge of continuous and simultaneous s...
2024-12-12T18:58:30Z
Github Repo: https://github.com/InternLM/InternLM-XComposer/tree/main/InternLM-XComposer-2.5-OmniLive
null
null
null
null
null
null
null
null
null
2412.09602
Hidden Biases of End-to-End Driving Datasets
['Julian Zimmerlin', 'Jens Beißwenger', 'Bernhard Jaeger', 'Andreas Geiger', 'Kashyap Chitta']
['cs.CV', 'cs.AI', 'cs.LG', 'cs.RO']
End-to-end driving systems have made rapid progress, but have so far not been applied to the challenging new CARLA Leaderboard 2.0. Further, while there is a large body of literature on end-to-end architectures and training strategies, the impact of the training dataset is often overlooked. In this work, we make a firs...
2024-12-12T18:59:13Z
Technical report for the CVPR 2024 Workshop on Foundation Models for Autonomous Systems. Runner-up of the track 'CARLA Autonomous Driving Challenge' in the 2024 Autonomous Grand Challenge (https://opendrivelab.com/challenge2024/)
null
null
Hidden Biases of End-to-End Driving Datasets
['Julian Zimmerlin', 'Jens Beisswenger', 'Bernhard Jaeger', 'Andreas Geiger', 'Kashyap Chitta']
2024
arXiv.org
11
33
['Computer Science']
2412.09605
AgentTrek: Agent Trajectory Synthesis via Guiding Replay with Web Tutorials
['Yiheng Xu', 'Dunjie Lu', 'Zhennan Shen', 'Junli Wang', 'Zekun Wang', 'Yuchen Mao', 'Caiming Xiong', 'Tao Yu']
['cs.CL']
Graphical User Interface (GUI) agents can automate complex tasks across digital environments, but their development is hindered by the scarcity of high-quality trajectory data for training. Existing approaches rely on expensive human annotation, making them unsustainable at scale. We propose AgentTrek, a scalable data ...
2024-12-12T18:59:27Z
ICLR2025 Spotlight https://agenttrek.github.io
null
null
null
null
null
null
null
null
null
2412.09612
Olympus: A Universal Task Router for Computer Vision Tasks
['Yuanze Lin', 'Yunsheng Li', 'Dongdong Chen', 'Weijian Xu', 'Ronald Clark', 'Philip H. S. Torr']
['cs.CV', 'cs.AI', 'cs.CL']
We introduce Olympus, a new approach that transforms Multimodal Large Language Models (MLLMs) into a unified framework capable of handling a wide array of computer vision tasks. Utilizing a controller MLLM, Olympus delegates over 20 specialized tasks across images, videos, and 3D objects to dedicated modules. This inst...
2024-12-12T18:59:40Z
Accepted to CVPR 2025, Project webpage: http://yuanze-lin.me/Olympus_page/
null
null
Olympus: A Universal Task Router for Computer Vision Tasks
['Yuanze Lin', 'Yunsheng Li', 'Dongdong Chen', 'Weijian Xu', 'Ronald Clark', 'Philip Torr']
2024
arXiv.org
1
94
['Computer Science']
2412.09613
PVC: Progressive Visual Token Compression for Unified Image and Video Processing in Large Vision-Language Models
['Chenyu Yang', 'Xuan Dong', 'Xizhou Zhu', 'Weijie Su', 'Jiahao Wang', 'Hao Tian', 'Zhe Chen', 'Wenhai Wang', 'Lewei Lu', 'Jifeng Dai']
['cs.CV']
Large Vision-Language Models (VLMs) have been extended to understand both images and videos. Visual token compression is leveraged to reduce the considerable token length of visual inputs. To meet the needs of different tasks, existing high-performance models usually process images and videos separately with different ...
2024-12-12T18:59:40Z
null
null
null
PVC: Progressive Visual Token Compression for Unified Image and Video Processing in Large Vision-Language Models
['Chenyu Yang', 'Xuan Dong', 'Xizhou Zhu', 'Weijie Su', 'Jiahao Wang', 'Hao Tian', 'Zhe Chen', 'Wenhai Wang', 'Lewei Lu', 'Jifeng Dai']
2024
arXiv.org
4
0
['Computer Science']
2412.09616
V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding
['Junqi Ge', 'Ziyi Chen', 'Jintao Lin', 'Jinguo Zhu', 'Xihui Liu', 'Jifeng Dai', 'Xizhou Zhu']
['cs.CV']
Vision-Language Models (VLMs) have shown promising capabilities in handling various multimodal tasks, yet they struggle in long-context scenarios, particularly in tasks involving videos, high-resolution images, or lengthy image-text documents. In our work, we first conduct an empirical analysis of the long-context capa...
2024-12-12T18:59:46Z
The code and models will be available at https://github.com/OpenGVLab/V2PE
null
null
V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding
['Junqi Ge', 'Ziyi Chen', 'Jintao Lin', 'Jinguo Zhu', 'Xihui Liu', 'Jifeng Dai', 'Xizhou Zhu']
2024
arXiv.org
7
0
['Computer Science']
2412.09618
EasyRef: Omni-Generalized Group Image Reference for Diffusion Models via Multimodal LLM
['Zhuofan Zong', 'Dongzhi Jiang', 'Bingqi Ma', 'Guanglu Song', 'Hao Shao', 'Dazhong Shen', 'Yu Liu', 'Hongsheng Li']
['cs.CV']
Significant achievements in personalization of diffusion models have been witnessed. Conventional tuning-free methods mostly encode multiple reference images by averaging their image embeddings as the injection condition, but such an image-independent operation cannot perform interaction among images to capture consist...
2024-12-12T18:59:48Z
Tech report
null
null
EasyRef: Omni-Generalized Group Image Reference for Diffusion Models via Multimodal LLM
['Zhuofan Zong', 'Dongzhi Jiang', 'Bingqi Ma', 'Guanglu Song', 'Hao Shao', 'Dazhong Shen', 'Yu Liu', 'Hongsheng Li']
2024
arXiv.org
8
0
['Computer Science']
2412.09620
Learning Camera Movement Control from Real-World Drone Videos
['Yunzhong Hou', 'Liang Zheng', 'Philip Torr']
['cs.CV', 'cs.RO']
This study seeks to automate camera movement control for filming existing subjects into attractive videos, contrasting with the creation of non-existent content by directly generating the pixels. We select drone videos as our test case due to their rich and challenging motion patterns, distinctive viewing angles, and p...
2024-12-12T18:59:54Z
null
null
null
Learning Camera Movement Control from Real-World Drone Videos
['Yunzhong Hou', 'Liang Zheng', 'Philip H. S. Torr']
2024
arXiv.org
4
0
['Computer Science']
2412.09624
GenEx: Generating an Explorable World
['Taiming Lu', 'Tianmin Shu', 'Junfei Xiao', 'Luoxin Ye', 'Jiahao Wang', 'Cheng Peng', 'Chen Wei', 'Daniel Khashabi', 'Rama Chellappa', 'Alan Yuille', 'Jieneng Chen']
['cs.CV', 'cs.RO']
Understanding, navigating, and exploring the 3D physical real world has long been a central challenge in the development of artificial intelligence. In this work, we take a step toward this goal by introducing GenEx, a system capable of planning complex embodied world exploration, guided by its generative imagination t...
2024-12-12T18:59:57Z
Website: GenEx.world
null
null
GenEx: Generating an Explorable World
['Taiming Lu', 'Tianmin Shu', 'Junfei Xiao', 'Luoxin Ye', 'Jiahao Wang', 'Cheng Peng', 'Chen Wei', 'Daniel Khashabi', 'Rama Chellappa', 'Alan L. Yuille', 'Jieneng Chen']
2024
arXiv.org
5
24
['Computer Science']
2412.09754
ViCaS: A Dataset for Combining Holistic and Pixel-level Video Understanding using Captions with Grounded Segmentation
['Ali Athar', 'Xueqing Deng', 'Liang-Chieh Chen']
['cs.CV']
Recent advances in multimodal large language models (MLLMs) have expanded research in video understanding, primarily focusing on high-level tasks such as video captioning and question-answering. Meanwhile, a smaller body of work addresses dense, pixel-precise segmentation tasks, which typically involve category-guided ...
2024-12-12T23:10:54Z
Accepted to CVPR 2025. Project page: https://ali2500.github.io/vicas-project/
null
null
ViCaS: A Dataset for Combining Holistic and Pixel-level Video Understanding using Captions with Grounded Segmentation
['Ali Athar', 'Xueqing Deng', 'Liang-Chieh Chen']
2024
arXiv.org
5
131
['Computer Science']
2412.09818
MERaLiON-AudioLLM: Bridging Audio and Language with Large Language Models
['Yingxu He', 'Zhuohan Liu', 'Shuo Sun', 'Bin Wang', 'Wenyu Zhang', 'Xunlong Zou', 'Nancy F. Chen', 'Ai Ti Aw']
['cs.CL', 'cs.AI']
We introduce MERaLiON-AudioLLM (Multimodal Empathetic Reasoning and Learning in One Network), the first speech-text model tailored for Singapore's multilingual and multicultural landscape. Developed under the National Large Language Models Funding Initiative, Singapore, MERaLiON-AudioLLM integrates advanced speech and ...
2024-12-13T03:15:05Z
https://huggingface.co/MERaLiON/MERaLiON-AudioLLM-Whisper-SEA-LION
null
null
null
null
null
null
null
null
null