Column schema as reported by the dataset viewer (dtype plus length range, value range, or class count):

| column | dtype | range / classes |
| --- | --- | --- |
| arxiv_id | float64 | 1.5k to 2.51k |
| title | string | lengths 9 to 178 |
| authors | string | lengths 2 to 22.8k |
| categories | string | lengths 4 to 146 |
| summary | string | lengths 103 to 1.92k |
| published | string (date) | 2015-02-06 10:44:00 to 2025-07-10 17:59:58 |
| comments | string | lengths 2 to 417 |
| journal_ref | string | 321 classes |
| doi | string | 398 classes |
| ss_title | string | lengths 8 to 159 |
| ss_authors | string | lengths 11 to 8.38k |
| ss_year | float64 | 2.02k to 2.03k |
| ss_venue | string | 281 classes |
| ss_citationCount | float64 | 0 to 134k |
| ss_referenceCount | float64 | 0 to 429 |
| ss_fieldsOfStudy | string | 47 classes |
arxiv_id: 2412.17153
Distilled Decoding 1: One-step Sampling of Image Auto-regressive Models with Flow Matching
['Enshu Liu', 'Xuefei Ning', 'Yu Wang', 'Zinan Lin']
['cs.CV', 'cs.LG']
Autoregressive (AR) models have achieved state-of-the-art performance in text and image generation but suffer from slow generation due to the token-by-token process. We ask an ambitious question: can a pre-trained AR model be adapted to generate outputs in just one or two steps? If successful, this would significantly ...
2024-12-22T20:21:54Z
comments / journal_ref / doi / ss_* fields: null
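The viewer renders other numeric cells the same way: `ss_year` appears as "2,024" and large counts use a "k" suffix (the schema's citation-count maximum shows as 134k). A small helper, as a sketch under the assumption that only these two renderings occur in this dump:

```python
def parse_viewer_number(text: str) -> float:
    """Parse a numeric cell as rendered by the dataset viewer:
    thousands separators ('2,024' -> 2024.0) and 'k' suffixes
    ('134k' -> 134000.0)."""
    text = text.strip().replace(",", "")
    if text.endswith("k"):
        return float(text[:-1]) * 1000
    return float(text)

print(parse_viewer_number("2,024"))  # 2024.0
print(parse_viewer_number("134k"))   # 134000.0
```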

arxiv_id: 2412.17295
Friends-MMC: A Dataset for Multi-modal Multi-party Conversation Understanding
['Yueqian Wang', 'Xiaojun Meng', 'Yuxuan Wang', 'Jianxin Liang', 'Qun Liu', 'Dongyan Zhao']
['cs.CL']
Multi-modal multi-party conversation (MMC) is a less studied yet important topic of research due to that it well fits real-world scenarios and thus potentially has more widely-used applications. Compared with the traditional multi-modal conversations, MMC requires stronger character-centered understanding abilities as ...
2024-12-23T05:32:48Z
Published at AAAI 2025
journal_ref / doi: null
ss_title: Friends-MMC: A Dataset for Multi-modal Multi-party Conversation Understanding
ss_authors: ['Yueqian Wang', 'Xiaojun Meng', 'Yuxuan Wang', 'Jianxin Liang', 'Qun Liu', 'Dongyan Zhao']
ss_year: 2024
ss_venue: AAAI Conference on Artificial Intelligence
ss_citationCount: 1
ss_referenceCount: 35
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2412.17364
Efficient fine-tuning methodology of text embedding models for information retrieval: contrastive learning penalty (clp)
['Jeongsu Yu']
['cs.IR', 'cs.AI', '68T50, 68P20', 'H.3.3; I.2.7']
Text embedding models play a crucial role in natural language processing, particularly in information retrieval, and their importance is further highlighted with the recent utilization of RAG (Retrieval- Augmented Generation). This study presents an efficient fine-tuning methodology encompassing data selection, loss fu...
2024-12-23T07:55:22Z
comments / journal_ref / doi / ss_* fields: null

arxiv_id: 2412.17395
WarriorCoder: Learning from Expert Battles to Augment Code Large Language Models
['Huawen Feng', 'Pu Zhao', 'Qingfeng Sun', 'Can Xu', 'Fangkai Yang', 'Lu Wang', 'Qianli Ma', 'Qingwei Lin', 'Saravan Rajmohan', 'Dongmei Zhang', 'Qi Zhang']
['cs.CL']
Despite recent progress achieved by code large language models (LLMs), their remarkable abilities are largely dependent on fine-tuning on the high-quality data, posing challenges for data collection and annotation. To address this, current methods often design various data flywheels to collect complex code instructions...
2024-12-23T08:47:42Z
comments / journal_ref / doi / ss_* fields: null

arxiv_id: 2412.17417
Multimodal Preference Data Synthetic Alignment with Reward Model
['Robert Wijaya', 'Ngoc-Bao Nguyen', 'Ngai-Man Cheung']
['cs.CV']
Multimodal large language models (MLLMs) have significantly advanced tasks like caption generation and visual question answering by integrating visual and textual data. However, they sometimes produce misleading or hallucinate content due to discrepancies between their pre-training data and real user prompts. Existing ...
2024-12-23T09:29:40Z
Project Page: https://pds-dpo.github.io/
journal_ref / doi / ss_* fields: null

arxiv_id: 2412.17449
Applying LLM and Topic Modelling in Psychotherapeutic Contexts
['Alexander Vanin', 'Vadim Bolshev', 'Anastasia Panfilova']
['cs.LG', 'cs.AI', 'I.2.7, J.4']
This study explores the use of Large language models to analyze therapist remarks in a psychotherapeutic setting. The paper focuses on the application of BERTopic, a machine learning-based topic modeling tool, to the dialogue of two different groups of therapists (classical and modern), which makes it possible to ident...
2024-12-23T10:14:32Z
18 pages, 4 figures
journal_ref / doi / ss_* fields: null

arxiv_id: 2412.17498
DRT: Deep Reasoning Translation via Long Chain-of-Thought
['Jiaan Wang', 'Fandong Meng', 'Yunlong Liang', 'Jie Zhou']
['cs.CL', 'cs.AI']
Recently, O1-like models have emerged as representative examples, illustrating the effectiveness of long chain-of-thought (CoT) in reasoning tasks such as math and coding tasks. In this paper, we introduce DRT, an attempt to bring the success of long CoT to neural machine translation (MT). Specifically, in view of the ...
2024-12-23T11:55:33Z
comments / journal_ref / doi / ss_* fields: null

arxiv_id: 2412.17596
LiveIdeaBench: Evaluating LLMs' Divergent Thinking for Scientific Idea Generation with Minimal Context
['Kai Ruan', 'Xuan Wang', 'Jixiang Hong', 'Peng Wang', 'Yang Liu', 'Hao Sun']
['cs.CL', 'cs.AI']
While Large Language Models (LLMs) demonstrate remarkable capabilities in scientific tasks such as literature analysis and experimental design (e.g., accurately extracting key findings from papers or generating coherent experimental procedures), existing evaluation benchmarks primarily assess performance using rich con...
2024-12-23T14:13:44Z
Updated manuscript and title
journal_ref / doi / ss_* fields: null

arxiv_id: 2412.17606
SBS Figures: Pre-training Figure QA from Stage-by-Stage Synthesized Images
['Risa Shinoda', 'Kuniaki Saito', 'Shohei Tanaka', 'Tosho Hirasawa', 'Yoshitaka Ushiku']
['cs.CV']
Building a large-scale figure QA dataset requires a considerable amount of work, from gathering and selecting figures to extracting attributes like text, numbers, and colors, and generating QAs. Although recent developments in LLMs have led to efforts to synthesize figures, most of these focus primarily on QA generatio...
2024-12-23T14:25:33Z
AAAI-25 Workshop on Document Understanding and Intelligence. Dataset and code: https://github.com/omron-sinicx/SBSFigures
journal_ref / doi / ss_* fields: null

arxiv_id: 2412.17644
DreamFit: Garment-Centric Human Generation via a Lightweight Anything-Dressing Encoder
['Ente Lin', 'Xujie Zhang', 'Fuwei Zhao', 'Yuxuan Luo', 'Xin Dong', 'Long Zeng', 'Xiaodan Liang']
['cs.CV']
Diffusion models for garment-centric human generation from text or image prompts have garnered emerging attention for their great application potential. However, existing methods often face a dilemma: lightweight approaches, such as adapters, are prone to generate inconsistent textures; while finetune-based methods inv...
2024-12-23T15:21:28Z
Accepted at AAAI 2025
journal_ref / doi / ss_* fields: null

arxiv_id: 2412.17726
VidTwin: Video VAE with Decoupled Structure and Dynamics
['Yuchi Wang', 'Junliang Guo', 'Xinyi Xie', 'Tianyu He', 'Xu Sun', 'Jiang Bian']
['cs.CV', 'cs.AI', 'cs.LG']
Recent advancements in video autoencoders (Video AEs) have significantly improved the quality and efficiency of video generation. In this paper, we propose a novel and compact video autoencoder, VidTwin, that decouples video into two distinct latent spaces: Structure latent vectors, which capture overall content and gl...
2024-12-23T17:16:58Z
Accepted by CVPR 2025; Project page: https://vidtwin.github.io/; Code: https://github.com/microsoft/VidTok/tree/main/vidtwin
journal_ref / doi / ss_* fields: null

arxiv_id: 2412.17743
YuLan-Mini: An Open Data-efficient Language Model
['Yiwen Hu', 'Huatong Song', 'Jia Deng', 'Jiapeng Wang', 'Jie Chen', 'Kun Zhou', 'Yutao Zhu', 'Jinhao Jiang', 'Zican Dong', 'Wayne Xin Zhao', 'Ji-Rong Wen']
['cs.CL']
Effective pre-training of large language models (LLMs) has been challenging due to the immense resource demands and the complexity of the technical processes involved. This paper presents a detailed technical report on YuLan-Mini, a highly capable base model with 2.42B parameters that achieves top-tier performance amon...
2024-12-23T17:47:53Z
comments / journal_ref / doi / ss_* fields: null

arxiv_id: 2412.17762
The Superposition of Diffusion Models Using the Itô Density Estimator
['Marta Skreta', 'Lazar Atanackovic', 'Avishek Joey Bose', 'Alexander Tong', 'Kirill Neklyudov']
['cs.LG']
The Cambrian explosion of easily accessible pre-trained diffusion models suggests a demand for methods that combine multiple different pre-trained diffusion models without incurring the significant computational burden of re-training a larger combined model. In this paper, we cast the problem of combining multiple pre-...
2024-12-23T18:18:07Z
Accepted as a Spotlight Presentation at the International Conference on Learning Representations 2025
journal_ref / doi: null
ss_title: The Superposition of Diffusion Models Using the Itô Density Estimator
ss_authors: ['Marta Skreta', 'Lazar Atanackovic', 'A. Bose', 'Alexander Tong', 'Kirill Neklyudov']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 12
ss_referenceCount: 0
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2412.17780
PepTune: De Novo Generation of Therapeutic Peptides with Multi-Objective-Guided Discrete Diffusion
['Sophia Tang', 'Yinuo Zhang', 'Pranam Chatterjee']
['q-bio.BM', 'cs.AI']
We present PepTune, a multi-objective discrete diffusion model for simultaneous generation and optimization of therapeutic peptide SMILES. Built on the Masked Discrete Language Model (MDLM) framework, PepTune ensures valid peptide structures with a novel bond-dependent masking schedule and invalid loss function. To gui...
2024-12-23T18:38:49Z
Published at ICML 2025. (Proceedings of the 42nd International Conference on Machine Learning, Vancouver, Canada)
journal_ref / doi / ss_* fields: null

arxiv_id: 2412.17800
Comprehensive Multi-Modal Prototypes are Simple and Effective Classifiers for Vast-Vocabulary Object Detection
['Yitong Chen', 'Wenhao Yao', 'Lingchen Meng', 'Sihong Wu', 'Zuxuan Wu', 'Yu-Gang Jiang']
['cs.CV']
Enabling models to recognize vast open-world categories has been a longstanding pursuit in object detection. By leveraging the generalization capabilities of vision-language models, current open-world detectors can recognize a broader range of vocabularies, despite being trained on limited categories. However, when the...
2024-12-23T18:57:43Z
Code is available at https://github.com/Row11n/Prova/tree/main
journal_ref / doi / ss_* fields: null

arxiv_id: 2412.17933
BenCzechMark : A Czech-centric Multitask and Multimetric Benchmark for Large Language Models with Duel Scoring Mechanism
['Martin Fajcik', 'Martin Docekal', 'Jan Dolezal', 'Karel Ondrej', 'Karel Beneš', 'Jan Kapsa', 'Pavel Smrz', 'Alexander Polok', 'Michal Hradis', 'Zuzana Neverilova', 'Ales Horak', 'Radoslav Sabol', 'Michal Stefanik', 'Adam Jirkovsky', 'David Adamczyk', 'Petr Hyner', 'Jan Hula', 'Hynek Kydlicek']
['cs.CL', 'cs.AI']
We present BenCzechMark (BCM), the first comprehensive Czech language benchmark designed for large language models, offering diverse tasks, multiple task formats, and multiple evaluation metrics. Its duel scoring system is grounded in statistical significance theory and uses aggregation across tasks inspired by social ...
2024-12-23T19:45:20Z
Accepted to TACL
journal_ref / doi: null
ss_title: BenCzechMark : A Czech-centric Multitask and Multimetric Benchmark for Large Language Models with Duel Scoring Mechanism
ss_authors: ['Martin Fajcik', 'Martin Docekal', 'Jan Dolezal', 'Karel Ondrej', 'Karel Benevs', 'Jan Kapsa', 'Pavel Smrz', 'Alexander Polok', 'Michal Hradis', 'Zuzana Neverilova', 'Aleš Horák', 'Radoslav Sabol', 'Michal Stefanik', 'Adam Jirkovský', 'D. Adamczyk', 'Petr Hyner', 'Jan Hula', 'Hynek Kydlícek']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 2
ss_referenceCount: 91
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2412.18148
Are We in the AI-Generated Text World Already? Quantifying and Monitoring AIGT on Social Media
['Zhen Sun', 'Zongmin Zhang', 'Xinyue Shen', 'Ziyi Zhang', 'Yule Liu', 'Michael Backes', 'Yang Zhang', 'Xinlei He']
['cs.AI', 'cs.CL', 'cs.CR', 'cs.SI']
Social media platforms are experiencing a growing presence of AI-Generated Texts (AIGTs). However, the misuse of AIGTs could have profound implications for public opinion, such as spreading misinformation and manipulating narratives. Despite its importance, it remains unclear how prevalent AIGTs are on social media. To...
2024-12-24T04:04:54Z
Accepted at ACL 2025 Main Conference. 29 pages, 21 figures, 12 tables
journal_ref / doi: null
ss_title: Are We in the AI-Generated Text World Already? Quantifying and Monitoring AIGT on Social Media
ss_authors: ['Zhen Sun', 'Zongmin Zhang', 'Xinyue Shen', 'Ziyi Zhang', 'Yule Liu', 'Michael Backes', 'Yang Zhang', 'Xinlei He']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 8
ss_referenceCount: 60
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2412.18165
Parallel Neural Computing for Scene Understanding from LiDAR Perception in Autonomous Racing
['Suwesh Prasad Sah']
['cs.CV']
Autonomous driving in high-speed racing, as opposed to urban environments, presents significant challenges in scene understanding due to rapid changes in the track environment. Traditional sequential network approaches may struggle to meet the real-time knowledge and decision-making demands of an autonomous agent cover...
2024-12-24T04:56:32Z
IEEE/ISED 2024
12th International Conference on Intelligent Systems and Embedded Design (ISED-2024)
10.1109/ISED63599.2024.10956572
ss_* fields: null

arxiv_id: 2412.18319
Mulberry: Empowering MLLM with o1-like Reasoning and Reflection via Collective Monte Carlo Tree Search
['Huanjin Yao', 'Jiaxing Huang', 'Wenhao Wu', 'Jingyi Zhang', 'Yibo Wang', 'Shunyu Liu', 'Yingjie Wang', 'Yuxin Song', 'Haocheng Feng', 'Li Shen', 'Dacheng Tao']
['cs.CV', 'cs.AI']
In this work, we aim to develop an MLLM that understands and solves questions by learning to create each intermediate step of the reasoning involved till the final answer. To this end, we propose Collective Monte Carlo Tree Search (CoMCTS), a new learning-to-reason method for MLLMs, which introduces the concept of coll...
2024-12-24T10:07:51Z
Technical report
journal_ref / doi: null
ss_title: Mulberry: Empowering MLLM with o1-like Reasoning and Reflection via Collective Monte Carlo Tree Search
ss_authors: ['Huanjin Yao', 'Jiaxing Huang', 'Wenhao Wu', 'Jingyi Zhang', 'Yibo Wang', 'Shunyu Liu', 'Yingjie Wang', 'Yuxin Song', 'Haocheng Feng', 'Li Shen', 'Dacheng Tao']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 54
ss_referenceCount: 79
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2412.18450
3DGraphLLM: Combining Semantic Graphs and Large Language Models for 3D Scene Understanding
['Tatiana Zemskova', 'Dmitry Yudin']
['cs.CV']
A 3D scene graph represents a compact scene model, storing information about the objects and the semantic relationships between them, making its use promising for robotic tasks. When interacting with a user, an embodied intelligent agent should be capable of responding to various queries about the scene formulated in n...
2024-12-24T14:21:58Z
comments / journal_ref / doi: null
ss_title: 3DGraphLLM: Combining Semantic Graphs and Large Language Models for 3D Scene Understanding
ss_authors: ['T. Zemskova', 'Dmitry A. Yudin']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 4
ss_referenceCount: 0
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2412.18525
Explanatory Instructions: Towards Unified Vision Tasks Understanding and Zero-shot Generalization
['Yang Shen', 'Xiu-Shen Wei', 'Yifan Sun', 'Yuxin Song', 'Tao Yuan', 'Jian Jin', 'Heyang Xu', 'Yazhou Yao', 'Errui Ding']
['cs.CV']
Computer Vision (CV) has yet to fully achieve the zero-shot task generalization observed in Natural Language Processing (NLP), despite following many of the milestones established in NLP, such as large transformer models, extensive pre-training, and the auto-regression paradigm, among others. In this paper, we explore ...
2024-12-24T16:08:25Z
ICML'25, 44 pages
journal_ref / doi / ss_* fields: null

arxiv_id: 2412.18552
Distilling Fine-grained Sentiment Understanding from Large Language Models
['Yice Zhang', 'Guangyu Xie', 'Hongling Xu', 'Kaiheng Hou', 'Jianzhu Bao', 'Qianlong Wang', 'Shiwei Chen', 'Ruifeng Xu']
['cs.CL']
Fine-grained sentiment analysis (FSA) aims to extract and summarize user opinions from vast opinionated text. Recent studies demonstrate that large language models (LLMs) possess exceptional sentiment understanding capabilities. However, directly deploying LLMs for FSA applications incurs high inference costs. Therefor...
2024-12-24T17:05:26Z
comments / journal_ref / doi / ss_* fields: null

arxiv_id: 2412.18565
3DEnhancer: Consistent Multi-View Diffusion for 3D Enhancement
['Yihang Luo', 'Shangchen Zhou', 'Yushi Lan', 'Xingang Pan', 'Chen Change Loy']
['cs.CV']
Despite advances in neural rendering, due to the scarcity of high-quality 3D datasets and the inherent limitations of multi-view diffusion models, view synthesis and 3D model generation are restricted to low resolutions with suboptimal multi-view consistency. In this study, we present a novel 3D enhancement pipeline, d...
2024-12-24T17:36:34Z
Project page: https://yihangluo.com/projects/3DEnhancer
journal_ref / doi: null
ss_title: 3DEnhancer: Consistent Multi-View Diffusion for 3D Enhancement
ss_authors: ['Yihang Luo', 'Shangchen Zhou', 'Yushi Lan', 'Xingang Pan', 'Chen Change Loy']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 0
ss_referenceCount: 95
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2412.18605
Orient Anything: Learning Robust Object Orientation Estimation from Rendering 3D Models
['Zehan Wang', 'Ziang Zhang', 'Tianyu Pang', 'Chao Du', 'Hengshuang Zhao', 'Zhou Zhao']
['cs.CV']
Orientation is a key attribute of objects, crucial for understanding their spatial pose and arrangement in images. However, practical solutions for accurate orientation estimation from a single image remain underexplored. In this work, we introduce Orient Anything, the first expert and foundational model designed to es...
2024-12-24T18:58:43Z
Project Page: https://orient-anything.github.io/
journal_ref / doi: null
ss_title: Orient Anything: Learning Robust Object Orientation Estimation from Rendering 3D Models
ss_authors: ['Zehan Wang', 'Ziang Zhang', 'Tianyu Pang', 'Chao Du', 'Hengshuang Zhao', 'Zhou Zhao']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 10
ss_referenceCount: 0
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2412.18609
Video-Panda: Parameter-efficient Alignment for Encoder-free Video-Language Models
['Jinhui Yi', 'Syed Talal Wasim', 'Yanan Luo', 'Muzammal Naseer', 'Juergen Gall']
['cs.CV']
We present an efficient encoder-free approach for video-language understanding that achieves competitive performance while significantly reducing computational overhead. Current video-language models typically rely on heavyweight image encoders (300M-1.1B parameters) or video encoders (1B-1.4B parameters), creating a s...
2024-12-24T18:59:56Z
CVPR 2025 camera-ready version
journal_ref / doi: null
ss_title: Video-Panda: Parameter-efficient Alignment for Encoder-free Video-Language Models
ss_authors: ['Jinhui Yi', 'Syed Talal Wasim', 'Yanan Luo', 'Muzammal Naseer', 'Juergen Gall']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 0
ss_referenceCount: 53
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2412.18860
Bootstrap Your Own Context Length
['Liang Wang', 'Nan Yang', 'Xingxing Zhang', 'Xiaolong Huang', 'Furu Wei']
['cs.CL', 'cs.IR']
We introduce a bootstrapping approach to train long-context language models by exploiting their short-context capabilities only. Our method utilizes a simple agent workflow to synthesize diverse long-context instruction tuning data, thereby eliminating the necessity for manual data collection and annotation. The propos...
2024-12-25T10:08:54Z
19 pages
journal_ref / doi / ss_* fields: null

arxiv_id: 2412.18925
HuatuoGPT-o1, Towards Medical Complex Reasoning with LLMs
['Junying Chen', 'Zhenyang Cai', 'Ke Ji', 'Xidong Wang', 'Wanlong Liu', 'Rongsheng Wang', 'Jianye Hou', 'Benyou Wang']
['cs.CL', 'cs.AI', 'cs.LG']
The breakthrough of OpenAI o1 highlights the potential of enhancing reasoning to improve LLM. Yet, most research in reasoning has focused on mathematical tasks, leaving domains like medicine underexplored. The medical domain, though distinct from mathematics, also demands robust reasoning to provide reliable answers, g...
2024-12-25T15:12:34Z
comments / journal_ref / doi / ss_* fields: null

arxiv_id: 2412.18928
UNIC-Adapter: Unified Image-instruction Adapter with Multi-modal Transformer for Image Generation
['Lunhao Duan', 'Shanshan Zhao', 'Wenjun Yan', 'Yinglun Li', 'Qing-Guo Chen', 'Zhao Xu', 'Weihua Luo', 'Kaifu Zhang', 'Mingming Gong', 'Gui-Song Xia']
['cs.CV', 'cs.LG']
Recently, text-to-image generation models have achieved remarkable advancements, particularly with diffusion models facilitating high-quality image synthesis from textual descriptions. However, these models often struggle with achieving precise control over pixel-level layouts, object appearances, and global styles whe...
2024-12-25T15:19:02Z
comments / journal_ref / doi / ss_* fields: null

arxiv_id: 2412.18945
Single Trajectory Distillation for Accelerating Image and Video Style Transfer
['Sijie Xu', 'Runqi Wang', 'Wei Zhu', 'Dejia Song', 'Nemo Chen', 'Xu Tang', 'Yao Hu']
['cs.CV']
Diffusion-based stylization methods typically denoise from a specific partial noise state for image-to-image and video-to-video tasks. This multi-step diffusion process is computationally expensive and hinders real-world application. A promising solution to speed up the process is to obtain few-step consistency models ...
2024-12-25T16:40:23Z
comments / journal_ref / doi: null
ss_title: Single Trajectory Distillation for Accelerating Image and Video Style Transfer
ss_authors: ['Sijie Xu', 'Runqi Wang', 'Wei Zhu', 'Dejia Song', 'Nemo Chen', 'Xu Tang', 'Yao Hu']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 0
ss_referenceCount: 39
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2412.19048
Jasper and Stella: distillation of SOTA embedding models
['Dun Zhang', 'Jiacheng Li', 'Ziyang Zeng', 'Fulong Wang']
['cs.IR']
A crucial component in many deep learning applications, such as Frequently Asked Questions (FAQ) and Retrieval-Augmented Generation (RAG), is dense retrieval. In this process, embedding models transform raw text into numerical vectors. However, the embedding models that currently excel on text embedding benchmarks, lik...
2024-12-26T04:05:28Z
7 pages, 1 figure
journal_ref / doi: null
ss_title: Jasper and Stella: distillation of SOTA embedding models
ss_authors: ['Dun Zhang', 'Jiacheng Li', 'Ziyang Zeng', 'Fulong Wang']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 35
ss_referenceCount: 0
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2412.19326
Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment
['Ziang Yan', 'Zhilin Li', 'Yinan He', 'Chenting Wang', 'Kunchang Li', 'Xinhao Li', 'Xiangyu Zeng', 'Zilei Wang', 'Yali Wang', 'Yu Qiao', 'Limin Wang', 'Yi Wang']
['cs.CV']
Current multimodal large language models (MLLMs) struggle with fine-grained or precise understanding of visuals although they give comprehensive perception and reasoning in a spectrum of vision applications. Recent studies either develop tool-using or unify specific visual tasks into the autoregressive framework, often...
2024-12-26T18:56:05Z
CVPR2025
journal_ref / doi / ss_* fields: null

arxiv_id: 2412.19412
MINIMA: Modality Invariant Image Matching
['Jiangwei Ren', 'Xingyu Jiang', 'Zizhuo Li', 'Dingkang Liang', 'Xin Zhou', 'Xiang Bai']
['cs.CV']
Image matching for both cross-view and cross-modality plays a critical role in multimodal perception. In practice, the modality gap caused by different imaging systems/styles poses great challenges to the matching task. Existing works try to extract invariant features for specific modalities and train on limited datase...
2024-12-27T02:39:50Z
Accepted to CVPR 2025. The dataset and code are available at https://github.com/LSXI7/MINIMA
journal_ref / doi / ss_* fields: null

arxiv_id: 2412.19437
DeepSeek-V3 Technical Report
['DeepSeek-AI', 'Aixin Liu', 'Bei Feng', 'Bing Xue', 'Bingxuan Wang', 'Bochao Wu', 'Chengda Lu', 'Chenggang Zhao', 'Chengqi Deng', 'Chenyu Zhang', 'Chong Ruan', 'Damai Dai', 'Daya Guo', 'Dejian Yang', 'Deli Chen', 'Dongjie Ji', 'Erhang Li', 'Fangyun Lin', 'Fucong Dai', 'Fuli Luo', 'Guangbo Hao', 'Guanting Chen', 'Guowe...
['cs.CL', 'cs.AI']
We present DeepSeek-V3, a strong Mixture-of-Experts (MoE) language model with 671B total parameters with 37B activated for each token. To achieve efficient inference and cost-effective training, DeepSeek-V3 adopts Multi-head Latent Attention (MLA) and DeepSeekMoE architectures, which were thoroughly validated in DeepSe...
2024-12-27T04:03:16Z
comments / journal_ref / doi: null
ss_title: DeepSeek-V3 Technical Report
ss_authors: ['DeepSeek-AI', 'A. Liu', 'Bei Feng', 'Bing Xue', 'Bing-Li Wang', 'Bochao Wu', 'Chengda Lu', 'Chenggang Zhao', 'C. Deng', 'Chenyu Zhang', 'C. Ruan', 'Damai Dai', 'Daya Guo', 'Dejian Yang', 'Deli Chen', 'Dong-Li Ji', 'Erhang Li', 'Fangyun Lin', 'Fucong Dai', 'Fuli Luo', 'Guangbo Hao', 'Guanting Chen', 'Guowei Li', 'H. Z...
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 821
ss_referenceCount: 0
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2412.19505
DrivingWorld: Constructing World Model for Autonomous Driving via Video GPT
['Xiaotao Hu', 'Wei Yin', 'Mingkai Jia', 'Junyuan Deng', 'Xiaoyang Guo', 'Qian Zhang', 'Xiaoxiao Long', 'Ping Tan']
['cs.CV']
Recent successes in autoregressive (AR) generation models, such as the GPT series in natural language processing, have motivated efforts to replicate this success in visual tasks. Some works attempt to extend this approach to autonomous driving by building video-based world models capable of generating realistic future...
2024-12-27T07:44:07Z
comments / journal_ref / doi: null
ss_title: DrivingWorld: Constructing World Model for Autonomous Driving via Video GPT
ss_authors: ['Xiaotao Hu', 'Wei Yin', 'Mingkai Jia', 'Junyuan Deng', 'Xiaoyang Guo', 'Qian Zhang', 'Xiaoxiao Long', 'Ping Tan']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 14
ss_referenceCount: 46
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2412.19628
RecConv: Efficient Recursive Convolutions for Multi-Frequency Representations
['Mingshu Zhao', 'Yi Luo', 'Yong Ouyang']
['cs.CV']
Recent advances in vision transformers (ViTs) have demonstrated the advantage of global modeling capabilities, prompting widespread integration of large-kernel convolutions for enlarging the effective receptive field (ERF). However, the quadratic scaling of parameter count and computational complexity (FLOPs) with resp...
2024-12-27T13:13:52Z
Tech report; Added supplementary material;
journal_ref / doi / ss_* fields: null

arxiv_id: 2412.19637
ReNeg: Learning Negative Embedding with Reward Guidance
['Xiaomin Li', 'Yixuan Liu', 'Takashi Isobe', 'Xu Jia', 'Qinpeng Cui', 'Dong Zhou', 'Dong Li', 'You He', 'Huchuan Lu', 'Zhongdao Wang', 'Emad Barsoum']
['cs.CV']
In text-to-image (T2I) generation applications, negative embeddings have proven to be a simple yet effective approach for enhancing generation quality. Typically, these negative embeddings are derived from user-defined negative prompts, which, while being functional, are not necessarily optimal. In this paper, we intro...
2024-12-27T13:31:55Z
Code: https://github.com/AMD-AIG-AIMA/ReNeg
journal_ref / doi / ss_* fields: null

arxiv_id: 2412.19638
Xmodel-2 Technical Report
['Wang Qun', 'Liu Yang', 'Lin Qingquan', 'Qu Zhijiu', 'Jiang Ling']
['cs.AI']
Xmodel-2 is a 1.2-billion-parameter large language model designed specifically for reasoning tasks. Its architecture enables different model scales to share a unified set of hyperparameters, allowing for extensive experimentation on smaller models and seamless transfer of optimal configurations to larger models. To max...
2024-12-27T13:32:10Z
comments / journal_ref / doi / ss_* fields: null

arxiv_id: 2412.19712
From Elements to Design: A Layered Approach for Automatic Graphic Design Composition
['Jiawei Lin', 'Shizhao Sun', 'Danqing Huang', 'Ting Liu', 'Ji Li', 'Jiang Bian']
['cs.CV']
In this work, we investigate automatic design composition from multimodal graphic elements. Although recent studies have developed various generative models for graphic design, they usually face the following limitations: they only focus on certain subtasks and are far from achieving the design composition task; they d...
2024-12-27T16:13:08Z
Project Page: $\href{https://elements2design.github.io/}{\text{elements2design}}$
journal_ref / doi: null
ss_title: From Elements to Design: A Layered Approach for Automatic Graphic Design Composition
ss_authors: ['Jiawei Lin', 'Shizhao Sun', 'Danqing Huang', 'Ting Liu', 'Ji Li', 'Jiang Bian']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 0
ss_referenceCount: 0
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2412.19723
OS-Genesis: Automating GUI Agent Trajectory Construction via Reverse Task Synthesis
['Qiushi Sun', 'Kanzhi Cheng', 'Zichen Ding', 'Chuanyang Jin', 'Yian Wang', 'Fangzhi Xu', 'Zhenyu Wu', 'Chengyou Jia', 'Liheng Chen', 'Zhoumianze Liu', 'Ben Kao', 'Guohao Li', 'Junxian He', 'Yu Qiao', 'Zhiyong Wu']
['cs.AI', 'cs.CL', 'cs.CV', 'cs.HC']
Graphical User Interface (GUI) agents powered by Vision-Language Models (VLMs) have demonstrated human-like computer control capability. Despite their utility in advancing digital automation, a critical bottleneck persists: collecting high-quality trajectory data for training. Common practices for collecting such data ...
2024-12-27T16:21:58Z
ACL 2025 Camera Ready
journal_ref / doi: null
ss_title: OS-Genesis: Automating GUI Agent Trajectory Construction via Reverse Task Synthesis
ss_authors: ['Qiushi Sun', 'Kanzhi Cheng', 'Zichen Ding', 'Chuanyang Jin', 'Yian Wang', 'Fangzhi Xu', 'Zhenyu Wu', 'Chengyou Jia', 'Liheng Chen', 'Zhoumianze Liu', 'Ben Kao', 'Guohao Li', 'Junxian He', 'Yu Qiao', 'Zhiyong Wu']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 26
ss_referenceCount: 54
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2412.20404
Open-Sora: Democratizing Efficient Video Production for All
['Zangwei Zheng', 'Xiangyu Peng', 'Tianji Yang', 'Chenhui Shen', 'Shenggui Li', 'Hongxin Liu', 'Yukun Zhou', 'Tianyi Li', 'Yang You']
['cs.CV']
Vision and language are the two foundational senses for humans, and they build up our cognitive ability and intelligence. While significant breakthroughs have been made in AI language ability, artificial visual intelligence, especially the ability to generate and simulate the world we see, is far lagging behind. To fac...
2024-12-29T08:52:49Z
comments / journal_ref / doi / ss_* fields: null

arxiv_id: 2412.20597
GliLem: Leveraging GliNER for Contextualized Lemmatization in Estonian
['Aleksei Dorkin', 'Kairit Sirts']
['cs.CL']
We present GliLem -- a novel hybrid lemmatization system for Estonian that enhances the highly accurate rule-based morphological analyzer Vabamorf with an external disambiguation module based on GliNER -- an open vocabulary NER model that is able to match text spans with text labels in natural language. We leverage the...
2024-12-29T22:02:00Z
Accepted to NoDaLiDa/Baltic-HLT 2025. Minor presentation and formatting fixes
journal_ref / doi: null
ss_title: GliLem: Leveraging GliNER for Contextualized Lemmatization in Estonian
ss_authors: ['Aleksei Dorkin', 'Kairit Sirts']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 0
ss_referenceCount: 29
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2412.21037
TangoFlux: Super Fast and Faithful Text to Audio Generation with Flow Matching and Clap-Ranked Preference Optimization
['Chia-Yu Hung', 'Navonil Majumder', 'Zhifeng Kong', 'Ambuj Mehrish', 'Amir Ali Bagherzadeh', 'Chuan Li', 'Rafael Valle', 'Bryan Catanzaro', 'Soujanya Poria']
['cs.SD', 'cs.AI', 'cs.CL', 'eess.AS']
We introduce TangoFlux, an efficient Text-to-Audio (TTA) generative model with 515M parameters, capable of generating up to 30 seconds of 44.1kHz audio in just 3.7 seconds on a single A40 GPU. A key challenge in aligning TTA models lies in the difficulty of creating preference pairs, as TTA lacks structured mechanisms ...
2024-12-30T16:02:44Z
https://tangoflux.github.io/
journal_ref / doi: null
ss_title: TangoFlux: Super Fast and Faithful Text to Audio Generation with Flow Matching and Clap-Ranked Preference Optimization
ss_authors: ['Chia-Yu Hung', 'Navonil Majumder', 'Zhifeng Kong', 'Ambuj Mehrish', 'Rafael Valle', 'Bryan Catanzaro', 'Soujanya Poria']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 10
ss_referenceCount: 54
ss_fieldsOfStudy: ['Computer Science', 'Engineering']

arxiv_id: 2412.21139
Training Software Engineering Agents and Verifiers with SWE-Gym
['Jiayi Pan', 'Xingyao Wang', 'Graham Neubig', 'Navdeep Jaitly', 'Heng Ji', 'Alane Suhr', 'Yizhe Zhang']
['cs.SE', 'cs.CL']
We present SWE-Gym, the first environment for training real-world software engineering (SWE) agents. SWE-Gym contains 2,438 real-world Python task instances, each comprising a codebase with an executable runtime environment, unit tests, and a task specified in natural language. We use SWE-Gym to train language model ba...
2024-12-30T18:15:39Z
Accepted at ICML 2025. Code at https://github.com/SWE-Gym/SWE-Gym
journal_ref / doi: null
ss_title: Training Software Engineering Agents and Verifiers with SWE-Gym
ss_authors: ['Jiayi Pan', 'Xingyao Wang', 'Graham Neubig', 'N. Jaitly', 'Heng Ji', 'Alane Suhr', 'Yizhe Zhang']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 50
ss_referenceCount: 51
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2412.21140
Facilitating large language model Russian adaptation with Learned Embedding Propagation
['Mikhail Tikhomirov', 'Daniil Chernyshev']
['cs.CL', 'cs.AI']
Rapid advancements of large language model (LLM) technologies led to the introduction of powerful open-source instruction-tuned LLMs that have the same text generation quality as the state-of-the-art counterparts such as GPT-4. While the emergence of such models accelerates the adoption of LLM technologies in sensitive...
2024-12-30T18:15:45Z
Preprint version of an article published in the Journal of Language and Education. Copyright held by the owner/author(s). Publication rights licensed to the Journal of Language and Education
null
null
Facilitating large language model Russian adaptation with Learned Embedding Propagation
['M. Tikhomirov', 'D. Chernyshev']
2,024
Journal of Language and Education
1
41
['Computer Science']
2,501.00062
ELECTRA and GPT-4o: Cost-Effective Partners for Sentiment Analysis
['James P. Beno']
['cs.CL', 'cs.AI', 'I.2.7']
Bidirectional transformers excel at sentiment analysis, and Large Language Models (LLMs) are effective zero-shot learners. Might they perform better as a team? This paper explores collaborative approaches between ELECTRA and GPT-4o for three-way sentiment classification. We fine-tuned (FT) four models (ELECTRA Base/Larg...
2024-12-29T05:29:52Z
19 pages, 4 figures. Source code and data available at https://github.com/jbeno/sentiment
Proceedings of the 4th International Workshop on Knowledge-Augmented Methods for Natural Language Processing, Association for Computational Linguistics, Albuquerque, New Mexico, USA (2025) 18-36
null
null
null
null
null
null
null
null
2,501.00114
DiCoW: Diarization-Conditioned Whisper for Target Speaker Automatic Speech Recognition
['Alexander Polok', 'Dominik Klement', 'Martin Kocour', 'Jiangyu Han', 'Federico Landini', 'Bolaji Yusuf', 'Matthew Wiesner', 'Sanjeev Khudanpur', 'Jan Černocký', 'Lukáš Burget']
['eess.AS', 'cs.SD']
Speaker-attributed automatic speech recognition (ASR) in multi-speaker environments remains a significant challenge, particularly when systems conditioned on speaker embeddings fail to generalize to unseen speakers. In this work, we propose Diarization-Conditioned Whisper (DiCoW), a novel approach to target-speaker ASR...
2024-12-30T19:24:15Z
null
null
null
null
null
null
null
null
null
null
2,501.00243
Cross-Layer Cache Aggregation for Token Reduction in Ultra-Fine-Grained Image Recognition
['Edwin Arkel Rios', 'Jansen Christopher Yuanda', 'Vincent Leon Ghanz', 'Cheng-Wei Yu', 'Bo-Cheng Lai', 'Min-Chun Hu']
['cs.CV', 'I.2; I.4']
Ultra-fine-grained image recognition (UFGIR) is a challenging task that involves classifying images within a macro-category. While traditional FGIR deals with classifying different species, UFGIR goes beyond by classifying sub-categories within a species such as cultivars of a plant. In recent times the usage of Vision...
2024-12-31T03:19:38Z
Accepted to ICASSP 2025. Main: 5 pages, 4 figures, 1 table
null
null
null
null
null
null
null
null
null
2,501.00353
RAG-Instruct: Boosting LLMs with Diverse Retrieval-Augmented Instructions
['Wanlong Liu', 'Junying Chen', 'Ke Ji', 'Li Zhou', 'Wenyu Chen', 'Benyou Wang']
['cs.CL', 'cs.AI', 'cs.LG']
Retrieval-Augmented Generation (RAG) has emerged as a key paradigm for enhancing large language models (LLMs) by incorporating external knowledge. However, current RAG methods face two limitations: (1) they cover only limited RAG scenarios, and (2) they suffer from limited task diversity due to the lack of a general RAG da...
2024-12-31T09:00:51Z
null
null
null
RAG-Instruct: Boosting LLMs with Diverse Retrieval-Augmented Instructions
['Wanlong Liu', 'Junying Chen', 'Ke Ji', 'Li Zhou', 'Wenyu Chen', 'Benyou Wang']
2,024
arXiv.org
7
0
['Computer Science']
2,501.00513
CaReBench: A Fine-Grained Benchmark for Video Captioning and Retrieval
['Yifan Xu', 'Xinhao Li', 'Yichun Yang', 'Desen Meng', 'Rui Huang', 'Limin Wang']
['cs.CV', 'cs.IR', 'cs.LG']
Video understanding, including video captioning and retrieval, is still a great challenge for video-language models (VLMs). The existing video retrieval and caption benchmarks only include short descriptions, limiting their ability to evaluate detailed video understanding. To address this problem, we present CaReBench,...
2024-12-31T15:53:50Z
null
null
null
CaReBench: A Fine-Grained Benchmark for Video Captioning and Retrieval
['Yifan Xu', 'Xinhao Li', 'Yichun Yang', 'Desen Meng', 'Rui Huang', 'Limin Wang']
2,024
null
0
38
['Computer Science']
2,501.00569
Probing Visual Language Priors in VLMs
['Tiange Luo', 'Ang Cao', 'Gunhee Lee', 'Justin Johnson', 'Honglak Lee']
['cs.CV', 'cs.LG']
Despite recent advances in Vision-Language Models (VLMs), they may over-rely on visual language priors existing in their training data rather than true visual reasoning. To investigate this, we introduce ViLP, a benchmark featuring deliberately out-of-distribution images synthesized via image generation models and out-...
2024-12-31T17:54:29Z
Project Page: https://vilp-team.github.io/
null
null
Probing Visual Language Priors in VLMs
['Tiange Luo', 'Ang Cao', 'Gunhee Lee', 'Justin Johnson', 'Honglak Lee']
2,024
arXiv.org
2
97
['Computer Science']
2,501.00574
VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling
['Xinhao Li', 'Yi Wang', 'Jiashuo Yu', 'Xiangyu Zeng', 'Yuhan Zhu', 'Haian Huang', 'Jianfei Gao', 'Kunchang Li', 'Yinan He', 'Chenting Wang', 'Yu Qiao', 'Yali Wang', 'Limin Wang']
['cs.CV', 'cs.LG']
Long-context video modeling is critical for multimodal large language models (MLLMs), enabling them to process movies, online video streams, and so on. Despite its advances, handling long videos remains challenging due to the difficulty in efficiently understanding the extremely long video context. This paper aims to a...
2024-12-31T18:01:23Z
null
null
null
null
null
null
null
null
null
null
2,501.00584
Online Video Understanding: OVBench and VideoChat-Online
['Zhenpeng Huang', 'Xinhao Li', 'Jiaqi Li', 'Jing Wang', 'Xiangyu Zeng', 'Cheng Liang', 'Tao Wu', 'Xi Chen', 'Liang Li', 'Limin Wang']
['cs.CV', 'cs.LG']
Multimodal Large Language Models (MLLMs) have significantly progressed in offline video understanding. However, applying these models to real-world scenarios, such as autonomous driving and human-computer interaction, presents unique challenges due to the need for real-time processing of continuous online video streams...
2024-12-31T18:17:05Z
CVPR 2025 Camera Ready Version. Project Page: https://videochat-online.github.io
null
null
Online Video Understanding: OVBench and VideoChat-Online
['Zhenpeng Huang', 'Xinhao Li', 'Jiaqi Li', 'Jing Wang', 'Xiangyun Zeng', 'Cheng Liang', 'Tao Wu', 'Xi Chen', 'Liang Li', 'Limin Wang']
2,024
Computer Vision and Pattern Recognition
0
63
['Computer Science']
2,501.00599
VideoRefer Suite: Advancing Spatial-Temporal Object Understanding with Video LLM
['Yuqian Yuan', 'Hang Zhang', 'Wentong Li', 'Zesen Cheng', 'Boqiang Zhang', 'Long Li', 'Xin Li', 'Deli Zhao', 'Wenqiao Zhang', 'Yueting Zhuang', 'Jianke Zhu', 'Lidong Bing']
['cs.CV', 'cs.AI', 'cs.LG']
Video Large Language Models (Video LLMs) have recently exhibited remarkable capabilities in general video understanding. However, they mainly focus on holistic comprehension and struggle with capturing fine-grained spatial and temporal details. Besides, the lack of high-quality object-level video instruction data and a...
2024-12-31T18:56:46Z
17 pages, 14 figures, technical report
null
null
null
null
null
null
null
null
null
2,501.00656
2 OLMo 2 Furious
['Team OLMo', 'Pete Walsh', 'Luca Soldaini', 'Dirk Groeneveld', 'Kyle Lo', 'Shane Arora', 'Akshita Bhagia', 'Yuling Gu', 'Shengyi Huang', 'Matt Jordan', 'Nathan Lambert', 'Dustin Schwenk', 'Oyvind Tafjord', 'Taira Anderson', 'David Atkinson', 'Faeze Brahman', 'Christopher Clark', 'Pradeep Dasigi', 'Nouha Dziri', 'Micha...
['cs.CL', 'cs.LG']
We present OLMo 2, the next generation of our fully open language models. OLMo 2 includes dense autoregressive models with improved architecture and training recipe, pretraining data mixtures, and instruction tuning recipes. Our modified model architecture and training recipe achieve both better training stability and ...
2024-12-31T21:55:10Z
Model demo available at playground.allenai.org
null
null
null
null
null
null
null
null
null
2,501.00658
Understanding and Mitigating Bottlenecks of State Space Models through the Lens of Recency and Over-smoothing
['Peihao Wang', 'Ruisi Cai', 'Yuehao Wang', 'Jiajun Zhu', 'Pragya Srivastava', 'Zhangyang Wang', 'Pan Li']
['cs.LG']
Structured State Space Models (SSMs) have emerged as alternatives to transformers. While SSMs are often regarded as effective in capturing long-sequence dependencies, we rigorously demonstrate that they are inherently limited by strong recency bias. Our empirical studies also reveal that this bias impairs the models' a...
2024-12-31T22:06:39Z
International Conference on Learning Representations (ICLR), 2025
null
null
null
null
null
null
null
null
null
2,501.00874
LUSIFER: Language Universal Space Integration for Enhanced Multilingual Embeddings with Large Language Models
['Hieu Man', 'Nghia Trung Ngo', 'Viet Dac Lai', 'Ryan A. Rossi', 'Franck Dernoncourt', 'Thien Huu Nguyen']
['cs.CL', 'cs.IR']
Recent advancements in large language model (LLM)-based embedding models have established new state-of-the-art benchmarks for text embedding tasks, particularly in dense vector-based retrieval. However, these models predominantly focus on English, leaving multilingual embedding capabilities largely unexplored. To add...
2025-01-01T15:43:07Z
null
null
null
null
null
null
null
null
null
null
2,501.00895
Text2Earth: Unlocking Text-driven Remote Sensing Image Generation with a Global-Scale Dataset and a Foundation Model
['Chenyang Liu', 'Keyan Chen', 'Rui Zhao', 'Zhengxia Zou', 'Zhenwei Shi']
['cs.CV']
Generative foundation models have advanced large-scale text-driven natural image generation, becoming a prominent research trend across various vertical domains. However, in the remote sensing field, there is still a lack of research on large-scale text-to-image (text2image) generation technology. Existing remote sensi...
2025-01-01T16:56:43Z
null
null
null
Text2Earth: Unlocking Text-driven Remote Sensing Image Generation with a Global-Scale Dataset and a Foundation Model
['Chenyang Liu', 'Ke-Yu Chen', 'Ruiyun Zhao', 'Zhengxia Zou', 'Z. Shi']
2,025
IEEE Geoscience and Remote Sensing Magazine
12
0
['Computer Science']
2,501.01028
KaLM-Embedding: Superior Training Data Brings A Stronger Embedding Model
['Xinshuo Hu', 'Zifei Shan', 'Xinping Zhao', 'Zetian Sun', 'Zhenyu Liu', 'Dongfang Li', 'Shaolin Ye', 'Xinyuan Wei', 'Qian Chen', 'Baotian Hu', 'Haofen Wang', 'Jun Yu', 'Min Zhang']
['cs.CL']
As retrieval-augmented generation prevails in large language models, embedding models are becoming increasingly crucial. Despite the growing number of general embedding models, prior work often overlooks the critical role of training data quality. In this work, we introduce KaLM-Embedding, a general multilingual embedd...
2025-01-02T03:17:51Z
Technical Report. 23 pages, 6 figures, 10 tables
null
null
KaLM-Embedding: Superior Training Data Brings A Stronger Embedding Model
['Xinshuo Hu', 'Zifei Shan', 'Xinping Zhao', 'Zetian Sun', 'Zhenyu Liu', 'Dongfang Li', 'Shaolin Ye', 'Xinyuan Wei', 'Qian Chen', 'Baotian Hu', 'Haofen Wang', 'Jun Yu', 'Min Zhang']
2,025
arXiv.org
3
0
['Computer Science']
2,501.01034
Advancing Singlish Understanding: Bridging the Gap with Datasets and Multimodal Models
['Bin Wang', 'Xunlong Zou', 'Shuo Sun', 'Wenyu Zhang', 'Yingxu He', 'Zhuohan Liu', 'Chengwei Wei', 'Nancy F. Chen', 'AiTi Aw']
['cs.CL', 'cs.SD', 'eess.AS']
Singlish, a Creole language rooted in English, is a key focus in linguistic research within multilingual and multicultural contexts. However, its spoken form remains underexplored, limiting insights into its linguistic structure and applications. To address this gap, we standardize and annotate the largest spoken Singl...
2025-01-02T03:28:52Z
Open-Source: https://github.com/AudioLLMs/Singlish
null
null
Advancing Singlish Understanding: Bridging the Gap with Datasets and Multimodal Models
['Bin Wang', 'Xunlong Zou', 'Shuo Sun', 'Wenyu Zhang', 'Yingxu He', 'Zhuohan Liu', 'Chengwei Wei', 'Nancy F. Chen', 'AiTi Aw']
2,025
arXiv.org
4
0
['Computer Science', 'Engineering']
2,501.01054
Dynamic Scaling of Unit Tests for Code Reward Modeling
['Zeyao Ma', 'Xiaokang Zhang', 'Jing Zhang', 'Jifan Yu', 'Sijia Luo', 'Jie Tang']
['cs.CL', 'cs.SE']
Current large language models (LLMs) often struggle to produce accurate responses on the first attempt for complex reasoning tasks like code generation. Prior research tackles this challenge by generating multiple candidate solutions and validating them with LLM-generated unit tests. The execution results of unit tests...
2025-01-02T04:33:31Z
Homepage: https://code-reward-model.github.io/
null
null
null
null
null
null
null
null
null
2,501.01097
EliGen: Entity-Level Controlled Image Generation with Regional Attention
['Hong Zhang', 'Zhongjie Duan', 'Xingjun Wang', 'Yingda Chen', 'Yu Zhang']
['cs.CV']
Recent advancements in diffusion models have significantly advanced text-to-image generation, yet global text prompts alone remain insufficient for achieving fine-grained control over individual entities within an image. To address this limitation, we present EliGen, a novel framework for Entity-level controlled image ...
2025-01-02T06:46:13Z
null
null
null
EliGen: Entity-Level Controlled Image Generation with Regional Attention
['Hong Zhang', 'Zhongjie Duan', 'Xingjun Wang', 'Yingda Chen', 'Yu Zhang']
2,025
arXiv.org
6
36
['Computer Science']
2,501.01320
SeedVR: Seeding Infinity in Diffusion Transformer Towards Generic Video Restoration
['Jianyi Wang', 'Zhijie Lin', 'Meng Wei', 'Yang Zhao', 'Ceyuan Yang', 'Fei Xiao', 'Chen Change Loy', 'Lu Jiang']
['cs.CV']
Video restoration poses non-trivial challenges in maintaining fidelity while recovering temporally consistent details from unknown degradations in the wild. Despite recent advances in diffusion-based restoration, these methods often face limitations in generation capability and sampling efficiency. In this work, we pre...
2025-01-02T16:19:48Z
CVPR25 CR ver., add a co-author additionally. Project page: https://iceclear.github.io/projects/seedvr/
null
null
SeedVR: Seeding Infinity in Diffusion Transformer Towards Generic Video Restoration
['Jianyi Wang', 'Zhijie Lin', 'Meng Wei', 'Yang Zhao', 'Ceyuan Yang', 'Chen Change Loy', 'Lu Jiang']
2,025
Computer Vision and Pattern Recognition
7
81
['Computer Science']
2,501.01423
Reconstruction vs. Generation: Taming Optimization Dilemma in Latent Diffusion Models
['Jingfeng Yao', 'Bin Yang', 'Xinggang Wang']
['cs.CV', 'cs.LG']
Latent diffusion models with Transformer architectures excel at generating high-fidelity images. However, recent studies reveal an optimization dilemma in this two-stage design: while increasing the per-token feature dimension in visual tokenizers improves reconstruction quality, it requires substantially larger diffus...
2025-01-02T18:59:40Z
Models and codes are available at: https://github.com/hustvl/LightningDiT
null
null
Reconstruction vs. Generation: Taming Optimization Dilemma in Latent Diffusion Models
['Jingfeng Yao', 'Xinggang Wang']
2,025
arXiv.org
32
45
['Computer Science']
2,501.01428
GPT4Scene: Understand 3D Scenes from Videos with Vision-Language Models
['Zhangyang Qi', 'Zhixiong Zhang', 'Ye Fang', 'Jiaqi Wang', 'Hengshuang Zhao']
['cs.CV']
In recent years, 2D Vision-Language Models (VLMs) have made significant strides in image-text understanding tasks. However, their performance in 3D spatial comprehension, which is critical for embodied intelligence, remains limited. Recent advances have leveraged 3D point clouds and multi-view images as inputs, yieldin...
2025-01-02T18:59:59Z
Project page: https://gpt4scene.github.io/
null
null
GPT4Scene: Understand 3D Scenes from Videos with Vision-Language Models
['Zhangyang Qi', 'Zhixiong Zhang', 'Ye Fang', 'Jiaqi Wang', 'Hengshuang Zhao']
2,025
arXiv.org
16
128
['Computer Science']
2,501.01668
CoT-based Synthesizer: Enhancing LLM Performance through Answer Synthesis
['Bohan Zhang', 'Xiaokang Zhang', 'Jing Zhang', 'Jifan Yu', 'Sijia Luo', 'Jie Tang']
['cs.CL']
Current inference scaling methods, such as Self-consistency and Best-of-N, have proven effective in improving the accuracy of LLMs on complex reasoning tasks. However, these methods rely heavily on the quality of candidate responses and are unable to produce correct answers when all candidates are incorrect. In this pa...
2025-01-03T06:50:06Z
Accepted as Main of ACL2025
null
null
null
null
null
null
null
null
null
2,501.01709
MoVE-KD: Knowledge Distillation for VLMs with Mixture of Visual Encoders
['Jiajun Cao', 'Yuan Zhang', 'Tao Huang', 'Ming Lu', 'Qizhe Zhang', 'Ruichuan An', 'Ningning MA', 'Shanghang Zhang']
['cs.CV', 'cs.AI']
Visual encoders are fundamental components in vision-language models (VLMs), each showcasing unique strengths derived from various pre-trained visual foundation models. To leverage the various capabilities of these encoders, recent studies incorporate multiple encoders within a single VLM, leading to a considerable inc...
2025-01-03T09:10:34Z
Accepted by CVPR 2025
null
null
null
null
null
null
null
null
null
2,501.01811
QuantumBind-RBFE: Accurate Relative Binding Free Energy Calculations Using Neural Network Potentials
['Francesc Sabanés Zariquiey', 'Stephen E. Farr', 'Stefan Doerr', 'Gianni De Fabritiis']
['physics.chem-ph', 'cs.LG', 'physics.comp-ph']
Accurate prediction of protein-ligand binding affinities is crucial in drug discovery, particularly during hit-to-lead and lead optimization phases; however, limitations in ligand force fields continue to impact prediction accuracy. In this work, we validate relative binding free energy (RBFE) accuracy using neural net...
2025-01-03T13:51:02Z
null
null
null
null
null
null
null
null
null
null
2,501.01895
EnerVerse: Envisioning Embodied Future Space for Robotics Manipulation
['Siyuan Huang', 'Liliang Chen', 'Pengfei Zhou', 'Shengcong Chen', 'Zhengkai Jiang', 'Yue Hu', 'Yue Liao', 'Peng Gao', 'Hongsheng Li', 'Maoqing Yao', 'Guanghui Ren']
['cs.RO', 'cs.CV', 'cs.LG']
We introduce EnerVerse, a generative robotics foundation model that constructs and interprets embodied spaces. EnerVerse employs an autoregressive video diffusion framework to predict future embodied spaces from instructions, enhanced by a sparse context memory for long-term reasoning. To model the 3D robotics world, w...
2025-01-03T17:00:33Z
Website: https://sites.google.com/view/enerverse
null
null
null
null
null
null
null
null
null
2,501.01904
Virgo: A Preliminary Exploration on Reproducing o1-like MLLM
['Yifan Du', 'Zikang Liu', 'Yifan Li', 'Wayne Xin Zhao', 'Yuqi Huo', 'Bingning Wang', 'Weipeng Chen', 'Zheng Liu', 'Zhongyuan Wang', 'Ji-Rong Wen']
['cs.CV', 'cs.AI']
Recently, slow-thinking reasoning systems, built upon large language models (LLMs), have garnered widespread attention by scaling the thinking time during inference. There is also growing interest in adapting this capability to multimodal large language models (MLLMs). Given that MLLMs handle more complex data semantic...
2025-01-03T17:14:16Z
Technical Report on Slow Thinking with LLMs: Visual Reasoning
null
null
Virgo: A Preliminary Exploration on Reproducing o1-like MLLM
['Yifan Du', 'Zikang Liu', 'Yifan Li', 'Wayne Xin Zhao', 'Yuqi Huo', 'Bingning Wang', 'Weipeng Chen', 'Zheng Liu', 'Zhongyuan Wang', 'Jiahui Wen']
2,025
arXiv.org
36
0
['Computer Science']
2,501.01957
VITA-1.5: Towards GPT-4o Level Real-Time Vision and Speech Interaction
['Chaoyou Fu', 'Haojia Lin', 'Xiong Wang', 'Yi-Fan Zhang', 'Yunhang Shen', 'Xiaoyu Liu', 'Haoyu Cao', 'Zuwei Long', 'Heting Gao', 'Ke Li', 'Long Ma', 'Xiawu Zheng', 'Rongrong Ji', 'Xing Sun', 'Caifeng Shan', 'Ran He']
['cs.CV', 'cs.SD', 'eess.AS']
Recent Multimodal Large Language Models (MLLMs) have typically focused on integrating visual and textual modalities, with less emphasis placed on the role of speech in enhancing interaction. However, speech plays a crucial role in multimodal dialogue systems, and implementing high-performance in both vision and speech ...
2025-01-03T18:59:52Z
https://github.com/VITA-MLLM/VITA (2K+ Stars by now)
null
null
VITA-1.5: Towards GPT-4o Level Real-Time Vision and Speech Interaction
['Chaoyou Fu', 'Haojia Lin', 'Xiong Wang', 'Yi-Fan Zhang', 'Yunhang Shen', 'Xiaoyu Liu', 'Yangze Li', 'Zuwei Long', 'Heting Gao', 'Ke Li', 'Xiawu Zheng', 'Rongrong Ji', 'Xing Sun', 'Caifeng Shan', 'Ran He']
2,025
arXiv.org
54
64
['Computer Science', 'Engineering']
2,501.02045
METAGENE-1: Metagenomic Foundation Model for Pandemic Monitoring
['Ollie Liu', 'Sami Jaghouar', 'Johannes Hagemann', 'Shangshang Wang', 'Jason Wiemels', 'Jeff Kaufman', 'Willie Neiswanger']
['q-bio.GN', 'cs.AI', 'cs.CL', 'cs.LG']
We pretrain METAGENE-1, a 7-billion-parameter autoregressive transformer model, which we refer to as a metagenomic foundation model, on a novel corpus of diverse metagenomic DNA and RNA sequences comprising over 1.5 trillion base pairs. This dataset is sourced from a large collection of human wastewater samples, proces...
2025-01-03T18:44:43Z
null
null
null
METAGENE-1: Metagenomic Foundation Model for Pandemic Monitoring
['Ollie Liu', 'Sami Jaghouar', 'Johannes Hagemann', 'Shangshang Wang', 'Jason Wiemels', 'Jeff Kaufman', 'W. Neiswanger']
2,025
arXiv.org
6
0
['Biology', 'Computer Science']
2,501.02260
MagicFace: High-Fidelity Facial Expression Editing with Action-Unit Control
['Mengting Wei', 'Tuomas Varanka', 'Xingxun Jiang', 'Huai-Qian Khor', 'Guoying Zhao']
['cs.CV']
We address the problem of facial expression editing by controlling the relative variation of facial action-unit (AU) from the same person. This enables us to edit this specific person's expression in a fine-grained, continuous and interpretable manner, while preserving their identity, pose, background and detailed facia...
2025-01-04T11:28:49Z
null
null
null
null
null
null
null
null
null
null
2,501.02393
Graph-Aware Isomorphic Attention for Adaptive Dynamics in Transformers
['Markus J. Buehler']
['cs.LG', 'cond-mat.mes-hall', 'cond-mat.mtrl-sci', 'cs.AI', 'cs.CL']
We present an approach to modifying Transformer architectures by integrating graph-aware relational reasoning into the attention mechanism, merging concepts from graph neural networks and language modeling. Building on the inherent connection between attention and graph theory, we reformulate the Transformer's attentio...
2025-01-04T22:30:21Z
null
null
null
Graph-Aware Isomorphic Attention for Adaptive Dynamics in Transformers
['Markus J. Buehler']
2,025
APL Machine Learning
3
72
['Computer Science', 'Physics']
2,501.02448
Understand, Solve and Translate: Bridging the Multilingual Mathematical Reasoning Gap
['Hyunwoo Ko', 'Guijin Son', 'Dasol Choi']
['cs.CL']
Large language models (LLMs) demonstrate exceptional performance on complex reasoning tasks. However, despite their strong reasoning capabilities in high-resource languages (e.g., English and Chinese), a significant performance gap persists in other languages. To investigate this gap in Korean, we introduce HRM8K, a be...
2025-01-05T05:57:22Z
18 pages, 14 figures, 9 tables
null
null
null
null
null
null
null
null
null
2,501.02464
Depth Any Camera: Zero-Shot Metric Depth Estimation from Any Camera
['Yuliang Guo', 'Sparsh Garg', 'S. Mahdi H. Miangoleh', 'Xinyu Huang', 'Liu Ren']
['cs.CV', 'cs.AI', 'cs.RO']
While recent depth foundation models exhibit strong zero-shot generalization, achieving accurate metric depth across diverse camera types, particularly those with large fields of view (FoV) such as fisheye and 360-degree cameras, remains a significant challenge. This paper presents Depth Any Camera (DAC), a powerful zero...
2025-01-05T07:22:40Z
null
null
null
Depth Any Camera: Zero-Shot Metric Depth Estimation from Any Camera
['Yuliang Guo', 'Sparsh Garg', 'S. Mahdi H. Miangoleh', 'Xinyu Huang', 'Liu Ren']
2,025
arXiv.org
4
62
['Computer Science']
2,501.02487
ACE++: Instruction-Based Image Creation and Editing via Context-Aware Content Filling
['Chaojie Mao', 'Jingfeng Zhang', 'Yulin Pan', 'Zeyinzi Jiang', 'Zhen Han', 'Yu Liu', 'Jingren Zhou']
['cs.CV']
We report ACE++, an instruction-based diffusion framework that tackles various image generation and editing tasks. Inspired by the input format for the inpainting task proposed by FLUX.1-Fill-dev, we improve the Long-context Condition Unit (LCU) introduced in ACE and extend this input paradigm to any editing and genera...
2025-01-05T09:40:58Z
null
null
null
null
null
null
null
null
null
null
2,501.02523
Face-MakeUp: Multimodal Facial Prompts for Text-to-Image Generation
['Dawei Dai', 'Mingming Jia', 'Yinxiu Zhou', 'Hang Xing', 'Chenghang Li']
['cs.CV', 'cs.AI']
Facial images have extensive practical applications. Although the current large-scale text-image diffusion models exhibit strong generation capabilities, it is challenging to generate the desired facial images using only text prompt. Image prompts are a logical choice. However, current methods of this type generally fo...
2025-01-05T12:46:31Z
null
null
null
Face-MakeUp: Multimodal Facial Prompts for Text-to-Image Generation
['Dawei Dai', 'Mingming Jia', 'Yinxiu Zhou', 'Hang Xing', 'Chenghang Li']
2,025
arXiv.org
1
0
['Computer Science']
2,501.02576
DepthMaster: Taming Diffusion Models for Monocular Depth Estimation
['Ziyang Song', 'Zerong Wang', 'Bo Li', 'Hao Zhang', 'Ruijie Zhu', 'Li Liu', 'Peng-Tao Jiang', 'Tianzhu Zhang']
['cs.CV']
Monocular depth estimation within the diffusion-denoising paradigm demonstrates impressive generalization ability but suffers from low inference speed. Recent methods adopt a single-step deterministic paradigm to improve inference efficiency while maintaining comparable performance. However, they overlook the gap betwe...
2025-01-05T15:18:32Z
11 pages, 6 figures, 6 tables
null
null
null
null
null
null
null
null
null
2,501.02629
Layer-Level Self-Exposure and Patch: Affirmative Token Mitigation for Jailbreak Attack Defense
['Yang Ouyang', 'Hengrui Gu', 'Shuhang Lin', 'Wenyue Hua', 'Jie Peng', 'Bhavya Kailkhura', 'Meijun Gao', 'Tianlong Chen', 'Kaixiong Zhou']
['cs.CR', 'cs.AI', 'cs.CL']
As large language models (LLMs) are increasingly deployed in diverse applications, including chatbot assistants and code generation, aligning their behavior with safety and ethical standards has become paramount. However, jailbreak attacks, which exploit vulnerabilities to elicit unintended or harmful outputs, threaten...
2025-01-05T19:06:03Z
14 pages, 4 figures, conference
null
null
null
null
null
null
null
null
null
2,501.02669
Generalizing from SIMPLE to HARD Visual Reasoning: Can We Mitigate Modality Imbalance in VLMs?
['Simon Park', 'Abhishek Panigrahi', 'Yun Cheng', 'Dingli Yu', 'Anirudh Goyal', 'Sanjeev Arora']
['cs.CV', 'cs.CL', 'cs.LG']
Vision Language Models (VLMs) are impressive at visual question answering and image captioning. But they underperform on multi-step visual reasoning -- even compared to LLMs on the same tasks presented in text form -- giving rise to perceptions of modality imbalance or brittleness. Towards a systematic study of such is...
2025-01-05T21:36:38Z
null
null
null
null
null
null
null
null
null
null
2,501.02790
Segmenting Text and Learning Their Rewards for Improved RLHF in Language Model
['Yueqin Yin', 'Shentao Yang', 'Yujia Xie', 'Ziyi Yang', 'Yuting Sun', 'Hany Awadalla', 'Weizhu Chen', 'Mingyuan Zhou']
['cs.CL', 'cs.AI']
Reinforcement learning from human feedback (RLHF) has been widely adopted to align language models (LMs) with human preference. Prior RLHF works typically take a bandit formulation, which, though intuitive, ignores the sequential nature of LM generation and can suffer from the sparse reward issue. While recent works pr...
2025-01-06T06:17:56Z
null
null
null
null
null
null
null
null
null
null
2,501.02976
STAR: Spatial-Temporal Augmentation with Text-to-Video Models for Real-World Video Super-Resolution
['Rui Xie', 'Yinhong Liu', 'Penghao Zhou', 'Chen Zhao', 'Jun Zhou', 'Kai Zhang', 'Zhenyu Zhang', 'Jian Yang', 'Zhenheng Yang', 'Ying Tai']
['cs.CV']
Image diffusion models have been adapted for real-world video super-resolution to tackle over-smoothing issues in GAN-based methods. However, these models struggle to maintain temporal consistency, as they are trained on static images, limiting their ability to capture temporal dynamics effectively. Integrating text-to...
2025-01-06T12:36:21Z
null
null
null
null
null
null
null
null
null
null
2,501.02979
Registering Source Tokens to Target Language Spaces in Multilingual Neural Machine Translation
['Zhi Qu', 'Yiran Wang', 'Jiannan Mao', 'Chenchen Ding', 'Hideki Tanaka', 'Masao Utiyama', 'Taro Watanabe']
['cs.CL']
Multilingual neural machine translation (MNMT) aims for arbitrary translation across multiple languages. Although MNMT-specific models trained on parallel data offer low costs in training and deployment, their performance consistently lags behind that of large language models (LLMs). In this work, we introduce reg...
2025-01-06T12:42:54Z
Accepted by ACL 2025 (main)
null
null
Registering Source Tokens to Target Language Spaces in Multilingual Neural Machine Translation
['Zhi Qu', 'Yiran Wang', 'Jiannan Mao', 'Chenchen Ding', 'Hideki Tanaka', 'Masao Utiyama', 'Taro Watanabe']
2,025
arXiv.org
0
61
['Computer Science']
2,501.03006
TransPixeler: Advancing Text-to-Video Generation with Transparency
['Luozhou Wang', 'Yijun Li', 'Zhifei Chen', 'Jui-Hsien Wang', 'Zhifei Zhang', 'He Zhang', 'Zhe Lin', 'Yingcong Chen']
['cs.CV']
Text-to-video generative models have made significant strides, enabling diverse applications in entertainment, advertising, and education. However, generating RGBA video, which includes alpha channels for transparency, remains a challenge due to limited datasets and the difficulty of adapting existing models. Alpha cha...
2025-01-06T13:32:16Z
Project page: https://wileewang.github.io/TransPixar/
null
null
TransPixeler: Advancing Text-to-Video Generation with Transparency
['Luozhou Wang', 'Yijun Li', 'Zhifei Chen', 'Jui-Hsien Wang', 'Zhifei Zhang', 'He Zhang', 'Zhe Lin', 'Yingcong Chen']
2,025
arXiv.org
2
0
['Computer Science']
2,501.03124
PRMBench: A Fine-grained and Challenging Benchmark for Process-Level Reward Models
['Mingyang Song', 'Zhaochen Su', 'Xiaoye Qu', 'Jiawei Zhou', 'Yu Cheng']
['cs.CL', 'cs.AI', 'cs.LG']
Process-level Reward Models (PRMs) are crucial for complex reasoning and decision-making tasks, where each intermediate step plays an important role in the reasoning process. Since language models are prone to various types of errors during the reasoning process, PRMs are required to possess nuanced capabilities for de...
2025-01-06T16:31:45Z
Accepted by ACL 2025 Main. Project Page: https://prmbench.github.io/
null
null
null
null
null
null
null
null
null
2,501.03172
GLiREL -- Generalist Model for Zero-Shot Relation Extraction
['Jack Boylan', 'Chris Hokamp', 'Demian Gholipour Ghalandari']
['cs.CL', 'cs.AI', 'cs.LG']
We introduce GLiREL (Generalist Lightweight model for zero-shot Relation Extraction), an efficient architecture and training paradigm for zero-shot relation classification. Inspired by recent advancements in zero-shot named entity recognition, this work presents an approach to efficiently and accurately predict zero-sh...
2025-01-06T17:42:29Z
Submitted to NAACL 2025
null
null
null
null
null
null
null
null
null
2,501.03218
Dispider: Enabling Video LLMs with Active Real-Time Interaction via Disentangled Perception, Decision, and Reaction
['Rui Qian', 'Shuangrui Ding', 'Xiaoyi Dong', 'Pan Zhang', 'Yuhang Zang', 'Yuhang Cao', 'Dahua Lin', 'Jiaqi Wang']
['cs.CV']
Active real-time interaction with video LLMs introduces a new paradigm for human-computer interaction, where the model not only understands user intent but also responds while continuously processing streaming video on the fly. Unlike offline video LLMs, which analyze the entire video before answering questions, active...
2025-01-06T18:55:10Z
null
null
null
Dispider: Enabling Video LLMs with Active Real-Time Interaction via Disentangled Perception, Decision, and Reaction
['Rui Qian', 'Shuangrui Ding', 'Xiao-wen Dong', 'Pan Zhang', 'Yuhang Zang', 'Yuhang Cao', 'Dahua Lin', 'Jiaqi Wang']
2025
arXiv.org
8
0
['Computer Science']
2501.03468
MTRAG: A Multi-Turn Conversational Benchmark for Evaluating Retrieval-Augmented Generation Systems
['Yannis Katsis', 'Sara Rosenthal', 'Kshitij Fadnis', 'Chulaka Gunasekara', 'Young-Suk Lee', 'Lucian Popa', 'Vraj Shah', 'Huaiyu Zhu', 'Danish Contractor', 'Marina Danilevsky']
['cs.CL', 'cs.AI']
Retrieval-augmented generation (RAG) has recently become a very popular task for Large Language Models (LLMs). Evaluating them on multi-turn RAG conversations, where the system is asked to generate a response to a question in the context of a preceding conversation is an important and often overlooked task with several...
2025-01-07T01:52:56Z
null
null
null
null
null
null
null
null
null
null
2501.03575
Cosmos World Foundation Model Platform for Physical AI
['NVIDIA', ':', 'Niket Agarwal', 'Arslan Ali', 'Maciej Bala', 'Yogesh Balaji', 'Erik Barker', 'Tiffany Cai', 'Prithvijit Chattopadhyay', 'Yongxin Chen', 'Yin Cui', 'Yifan Ding', 'Daniel Dworakowski', 'Jiaojiao Fan', 'Michele Fenzi', 'Francesco Ferroni', 'Sanja Fidler', 'Dieter Fox', 'Songwei Ge', 'Yunhao Ge', 'Jinwei G...
['cs.CV', 'cs.AI', 'cs.LG', 'cs.RO']
Physical AI needs to be trained digitally first. It needs a digital twin of itself, the policy model, and a digital twin of the world, the world model. In this paper, we present the Cosmos World Foundation Model Platform to help developers build customized world models for their Physical AI setups. We position a world ...
2025-01-07T06:55:50Z
null
null
null
Cosmos World Foundation Model Platform for Physical AI
['Nvidia Niket Agarwal', 'Arslan Ali', 'Maciej Bala', 'Yogesh Balaji', 'Erik Barker', 'Tiffany Cai', 'Prithvijit Chattopadhyay', 'Yongxin Chen', 'Yin Cui', 'Yifan Ding', 'Daniel Dworakowski', 'Jiaojiao Fan', 'Michele Fenzi', 'Francesco Ferroni', 'Sanja Fidler', 'Dieter Fox', 'Songwei Ge', 'Yunhao Ge', 'Jinwei Gu', 'Sid...
2025
arXiv.org
129
0
['Computer Science']
2501.03699
Motion-Aware Generative Frame Interpolation
['Guozhen Zhang', 'Yuhan Zhu', 'Yutao Cui', 'Xiaotong Zhao', 'Kai Ma', 'Limin Wang']
['cs.CV']
Flow-based frame interpolation methods ensure motion stability through estimated intermediate flow but often introduce severe artifacts in complex motion regions. Recent generative approaches, boosted by large-scale pre-trained video generation models, show promise in handling intricate scenes. However, they frequently...
2025-01-07T11:03:43Z
null
null
null
null
null
null
null
null
null
null
2501.03847
Diffusion as Shader: 3D-aware Video Diffusion for Versatile Video Generation Control
['Zekai Gu', 'Rui Yan', 'Jiahao Lu', 'Peng Li', 'Zhiyang Dou', 'Chenyang Si', 'Zhen Dong', 'Qifeng Liu', 'Cheng Lin', 'Ziwei Liu', 'Wenping Wang', 'Yuan Liu']
['cs.CV', 'cs.AI', 'cs.GR']
Diffusion models have demonstrated impressive performance in generating high-quality videos from text prompts or images. However, precise control over the video generation process, such as camera manipulation or content editing, remains a significant challenge. Existing methods for controlled video generation are typic...
2025-01-07T15:01:58Z
Project page: https://igl-hkust.github.io/das/ Codes: https://github.com/IGL-HKUST/DiffusionAsShader
null
null
null
null
null
null
null
null
null
2501.03895
LLaVA-Mini: Efficient Image and Video Large Multimodal Models with One Vision Token
['Shaolei Zhang', 'Qingkai Fang', 'Zhe Yang', 'Yang Feng']
['cs.CV', 'cs.AI', 'cs.CL']
The advent of real-time large multimodal models (LMMs) like GPT-4o has sparked considerable interest in efficient LMMs. LMM frameworks typically encode visual inputs into vision tokens (continuous representations) and integrate them and textual instructions into the context of large language models (LLMs), where large-...
2025-01-07T16:03:14Z
Accepted to ICLR 2025. Code: https://github.com/ictnlp/LLaVA-Mini Model: https://huggingface.co/ICTNLP/llava-mini-llama-3.1-8b
null
null
LLaVA-Mini: Efficient Image and Video Large Multimodal Models with One Vision Token
['Shaolei Zhang', 'Qingkai Fang', 'Zhe Yang', 'Yang Feng']
2025
International Conference on Learning Representations
43
59
['Computer Science']
2501.04001
Sa2VA: Marrying SAM2 with LLaVA for Dense Grounded Understanding of Images and Videos
['Haobo Yuan', 'Xiangtai Li', 'Tao Zhang', 'Zilong Huang', 'Shilin Xu', 'Shunping Ji', 'Yunhai Tong', 'Lu Qi', 'Jiashi Feng', 'Ming-Hsuan Yang']
['cs.CV']
This work presents Sa2VA, the first unified model for dense grounded understanding of both images and videos. Unlike existing multi-modal large language models, which are often limited to specific modalities and tasks, Sa2VA supports a wide range of image and video tasks, including referring segmentation and conversati...
2025-01-07T18:58:54Z
Project page: https://lxtgh.github.io/project/sa2va
null
null
null
null
null
null
null
null
null
2501.04180
HIVEX: A High-Impact Environment Suite for Multi-Agent Research (extended version)
['Philipp Dominic Siedler']
['cs.MA', 'cs.AI', 'cs.GT']
Games have been vital test beds for the rapid development of Agent-based research. Remarkable progress has been achieved in the past, but it is unclear if the findings equip for real-world problems. While pressure grows, some of the most critical ecological challenges can find mitigation and prevention solutions throug...
2025-01-07T23:16:31Z
null
null
null
HIVEX: A High-Impact Environment Suite for Multi-Agent Research (extended version)
['P. D. Siedler']
2025
arXiv.org
1
0
['Computer Science']
2501.04519
rStar-Math: Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking
['Xinyu Guan', 'Li Lyna Zhang', 'Yifei Liu', 'Ning Shang', 'Youran Sun', 'Yi Zhu', 'Fan Yang', 'Mao Yang']
['cs.CL']
We present rStar-Math to demonstrate that small language models (SLMs) can rival or even surpass the math reasoning capability of OpenAI o1, without distillation from superior models. rStar-Math achieves this by exercising "deep thinking" through Monte Carlo Tree Search (MCTS), where a math policy SLM performs test-tim...
2025-01-08T14:12:57Z
null
null
null
rStar-Math: Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking
['Xinyu Guan', 'L. Zhang', 'Yifei Liu', 'Ning Shang', 'Youran Sun', 'Yi Zhu', 'Fan Yang', 'Mao Yang']
2025
arXiv.org
133
50
['Computer Science']
2501.04561
OpenOmni: Advancing Open-Source Omnimodal Large Language Models with Progressive Multimodal Alignment and Real-Time Self-Aware Emotional Speech Synthesis
['Run Luo', 'Ting-En Lin', 'Haonan Zhang', 'Yuchuan Wu', 'Xiong Liu', 'Min Yang', 'Yongbin Li', 'Longze Chen', 'Jiaming Li', 'Lei Zhang', 'Yangyi Chen', 'Xiaobo Xia', 'Hamid Alinejad-Rokny', 'Fei Huang']
['cs.CL', 'cs.CV']
Recent advancements in omnimodal learning have significantly improved understanding and generation across images, text, and speech, yet these developments remain predominantly confined to proprietary models. The lack of high-quality omnimodal datasets and the challenges of real-time emotional speech synthesis have nota...
2025-01-08T15:18:09Z
null
null
null
null
null
null
null
null
null
null
2501.04575
InfiGUIAgent: A Multimodal Generalist GUI Agent with Native Reasoning and Reflection
['Yuhang Liu', 'Pengxiang Li', 'Zishu Wei', 'Congkai Xie', 'Xueyu Hu', 'Xinchen Xu', 'Shengyu Zhang', 'Xiaotian Han', 'Hongxia Yang', 'Fei Wu']
['cs.AI', 'cs.CL', 'cs.HC']
Graphical User Interface (GUI) Agents, powered by multimodal large language models (MLLMs), have shown great potential for task automation on computing devices such as computers and mobile phones. However, existing agents face challenges in multi-step reasoning and reliance on textual annotations, limiting their effect...
2025-01-08T15:45:21Z
14 pages, 7 figures, work in progress
null
null
InfiGUIAgent: A Multimodal Generalist GUI Agent with Native Reasoning and Reflection
['Yuhang Liu', 'Pengxiang Li', 'Zishu Wei', 'Congkai Xie', 'Xueyu Hu', 'Xinchen Xu', 'Shengyu Zhang', 'Xiaotian Han', 'Hongxia Yang', 'Fei Wu']
2025
arXiv.org
22
53
['Computer Science']
2501.04670
Are They the Same? Exploring Visual Correspondence Shortcomings of Multimodal LLMs
['Yikang Zhou', 'Tao Zhang', 'Shilin Xu', 'Shihao Chen', 'Qianyu Zhou', 'Yunhai Tong', 'Shunping Ji', 'Jiangning Zhang', 'Lu Qi', 'Xiangtai Li']
['cs.CV']
Recent advancements in multimodal large language models (MLLM) have shown a strong ability in visual perception, reasoning abilities, and vision-language understanding. However, the visual matching ability of MLLMs is rarely studied, despite finding the visual correspondence of objects is essential in computer vision. ...
2025-01-08T18:30:53Z
Accepted by ICCV2025
null
null
null
null
null
null
null
null
null
2501.04686
URSA: Understanding and Verifying Chain-of-thought Reasoning in Multimodal Mathematics
['Ruilin Luo', 'Zhuofan Zheng', 'Yifan Wang', 'Xinzhe Ni', 'Zicheng Lin', 'Songtao Jiang', 'Yiyao Yu', 'Chufan Shi', 'Ruihang Chu', 'Jin Zeng', 'Yujiu Yang']
['cs.CL', 'cs.AI', 'cs.LG']
Process Reward Models (PRMs) have shown promise in enhancing the mathematical reasoning capabilities of Large Language Models (LLMs) through Test-Time Scaling (TTS). However, their integration into multimodal reasoning remains largely unexplored. In this work, we take the first step toward unlocking the potential of PR...
2025-01-08T18:49:41Z
Update version. Project url: https://ursa-math.github.io
null
null
null
null
null
null
null
null
null
2501.04689
SPAR3D: Stable Point-Aware Reconstruction of 3D Objects from Single Images
['Zixuan Huang', 'Mark Boss', 'Aaryaman Vasishta', 'James M. Rehg', 'Varun Jampani']
['cs.CV', 'cs.GR']
We study the problem of single-image 3D object reconstruction. Recent works have diverged into two directions: regression-based modeling and generative modeling. Regression methods efficiently infer visible surfaces, but struggle with occluded regions. Generative methods handle uncertain regions better by modeling dist...
2025-01-08T18:52:03Z
null
null
null
null
null
null
null
null
null
null