| arxiv_id | title | authors | categories | summary | published | comments | journal_ref | doi | ss_title | ss_authors | ss_year | ss_venue | ss_citationCount | ss_referenceCount | ss_fieldsOfStudy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2501.04693 | Beyond Sight: Finetuning Generalist Robot Policies with Heterogeneous Sensors via Language Grounding | ['Joshua Jones', 'Oier Mees', 'Carmelo Sferrazza', 'Kyle Stachowicz', 'Pieter Abbeel', 'Sergey Levine'] | ['cs.RO', 'cs.AI'] | Interacting with the world is a multi-sensory experience: achieving effective general-purpose interaction requires making use of all available modalities -- including vision, touch, and audio -- to fill in gaps from partial observation. For example, when vision is occluded reaching into a bag, a robot should rely on it... | 2025-01-08T18:57:33Z | null | null | null | null | null | null | null | null | null | null |
| 2501.04697 | Grokking at the Edge of Numerical Stability | ['Lucas Prieto', 'Melih Barsbey', 'Pedro A. M. Mediano', 'Tolga Birdal'] | ['cs.LG', 'cs.AI', 'cs.CV', 'stat.ML'] | Grokking, the sudden generalization that occurs after prolonged overfitting, is a surprising phenomenon challenging our understanding of deep learning. Although significant progress has been made in understanding grokking, the reasons behind the delayed generalization and its dependence on regularization remain unclear... | 2025-01-08T18:58:48Z | null | null | null | null | null | null | null | null | null | null |
| 2501.04828 | Building Foundations for Natural Language Processing of Historical Turkish: Resources and Models | ['Şaziye Betül Özateş', 'Tarık Emre Tıraş', 'Ece Elif Adak', 'Berat Doğan', 'Fatih Burak Karagöz', 'Efe Eren Genç', 'Esma F. Bilgin Taşdemir'] | ['cs.CL'] | This paper introduces foundational resources and models for natural language processing (NLP) of historical Turkish, a domain that has remained underexplored in computational linguistics. We present the first named entity recognition (NER) dataset, HisTR and the first Universal Dependencies treebank, OTA-BOUN for a his... | 2025-01-08T20:29:00Z | null | null | null | Building Foundations for Natural Language Processing of Historical Turkish: Resources and Models | ['S. Özates', 'Tarik Emre Tiras', 'Ece Elif Adak', 'Berat Dogan', 'F. Karagöz', 'Efe Eren Genç', 'Esma F. Bilgin Tasdemir'] | 2025 | arXiv.org | 1 | 0 | ['Computer Science'] |
| 2501.04858 | Advancing Retrieval-Augmented Generation for Persian: Development of Language Models, Comprehensive Benchmarks, and Best Practices for Optimization | ['Sara Bourbour Hosseinbeigi', 'Sina Asghari', 'Mohammad Ali Seif Kashani', 'Mohammad Hossein Shalchian', 'Mohammad Amin Abbasi'] | ['cs.CL'] | This paper examines the specific obstacles of constructing Retrieval-Augmented Generation(RAG) systems in low-resource languages, with a focus on Persian's complicated morphology and versatile syntax. The research aims to improve retrieval and generation accuracy by introducing Persian-specific models, namely MatinaRob... | 2025-01-08T22:16:40Z | null | null | null | Advancing Retrieval-Augmented Generation for Persian: Development of Language Models, Comprehensive Benchmarks, and Best Practices for Optimization | ['Sara Bourbour Hosseinbeigi', 'Sina Asghari', 'Mohammad Ali Seif Kashani', 'Mohammad Hossein Shalchian', 'Mohammad Amin Abbasi'] | 2025 | arXiv.org | 0 | 0 | ['Computer Science'] |
| 2501.05032 | Enhancing Human-Like Responses in Large Language Models | ['Ethem Yağız Çalık', 'Talha Rüzgar Akkuş'] | ['cs.CL', 'cs.AI'] | This paper explores the advancements in making large language models (LLMs) more human-like. We focus on techniques that enhance natural language understanding, conversational coherence, and emotional intelligence in AI systems. The study evaluates various approaches, including fine-tuning with diverse datasets, incorp... | 2025-01-09T07:44:06Z | null | null | null | Enhancing Human-Like Responses in Large Language Models | ['Ethem Yagiz Çalik', 'Talha Rüzgar Akkus'] | 2025 | arXiv.org | 0 | 0 | ['Computer Science'] |
| 2501.05040 | SWE-Fixer: Training Open-Source LLMs for Effective and Efficient GitHub Issue Resolution | ['Chengxing Xie', 'Bowen Li', 'Chang Gao', 'He Du', 'Wai Lam', 'Difan Zou', 'Kai Chen'] | ['cs.CL'] | Large Language Models (LLMs) have demonstrated remarkable proficiency across a variety of complex tasks. One significant application of LLMs is in tackling software engineering challenges, particularly in resolving real-world tasks on GitHub by fixing code based on the issues reported by the users. However, many curren... | 2025-01-09T07:54:24Z | Our code, data, and model will be released at https://github.com/InternLM/SWE-Fixer | null | null | null | null | null | null | null | null | null |
| 2501.05122 | Centurio: On Drivers of Multilingual Ability of Large Vision-Language Model | ['Gregor Geigle', 'Florian Schneider', 'Carolin Holtermann', 'Chris Biemann', 'Radu Timofte', 'Anne Lauscher', 'Goran Glavaš'] | ['cs.CL', 'cs.CV'] | Most Large Vision-Language Models (LVLMs) to date are trained predominantly on English data, which makes them struggle to understand non-English input and fail to generate output in the desired target language. Existing efforts mitigate these issues by adding multilingual training data, but do so in a largely ad-hoc ma... | 2025-01-09T10:26:14Z | null | null | null | null | null | null | null | null | null | null |
| 2501.05131 | 3DIS-FLUX: simple and efficient multi-instance generation with DiT rendering | ['Dewei Zhou', 'Ji Xie', 'Zongxin Yang', 'Yi Yang'] | ['cs.CV'] | The growing demand for controllable outputs in text-to-image generation has driven significant advancements in multi-instance generation (MIG), enabling users to define both instance layouts and attributes. Currently, the state-of-the-art methods in MIG are primarily adapter-based. However, these methods necessitate re... | 2025-01-09T10:34:00Z | tech report | null | null | null | null | null | null | null | null | null |
| 2501.05441 | The GAN is dead; long live the GAN! A Modern GAN Baseline | ['Yiwen Huang', 'Aaron Gokaslan', 'Volodymyr Kuleshov', 'James Tompkin'] | ['cs.LG', 'cs.CV'] | There is a widely-spread claim that GANs are difficult to train, and GAN architectures in the literature are littered with empirical tricks. We provide evidence against this claim and build a modern GAN baseline in a more principled manner. First, we derive a well-behaved regularized relativistic GAN loss that addresse... | 2025-01-09T18:53:06Z | Accepted to NeurIPS 2024. Code available at https://github.com/brownvc/R3GAN/ | null | null | null | null | null | null | null | null | null |
| 2501.05452 | ReFocus: Visual Editing as a Chain of Thought for Structured Image Understanding | ['Xingyu Fu', 'Minqian Liu', 'Zhengyuan Yang', 'John Corring', 'Yijuan Lu', 'Jianwei Yang', 'Dan Roth', 'Dinei Florencio', 'Cha Zhang'] | ['cs.CV', 'cs.CL'] | Structured image understanding, such as interpreting tables and charts, requires strategically refocusing across various structures and texts within an image, forming a reasoning sequence to arrive at the final answer. However, current multimodal large language models (LLMs) lack this multihop selective attention capab... | 2025-01-09T18:59:58Z | Project link: https://zeyofu.github.io/ReFocus/ | null | null | null | null | null | null | null | null | null |
| 2501.05586 | FreeSVC: Towards Zero-shot Multilingual Singing Voice Conversion | ['Alef Iury Siqueira Ferreira', 'Lucas Rafael Gris', 'Augusto Seben da Rosa', 'Frederico Santos de Oliveira', 'Edresson Casanova', 'Rafael Teixeira Sousa', 'Arnaldo Candido Junior', 'Anderson da Silva Soares', 'Arlindo Galvão Filho'] | ['cs.SD', 'eess.AS'] | This work presents FreeSVC, a promising multilingual singing voice conversion approach that leverages an enhanced VITS model with Speaker-invariant Clustering (SPIN) for better content representation and the State-of-the-Art (SOTA) speaker encoder ECAPA2. FreeSVC incorporates trainable language embeddings to handle mul... | 2025-01-09T21:39:09Z | null | null | 10.1109/ICASSP49660.2025.10890068 | null | null | null | null | null | null | null |
| 2501.05648 | Improving AI weather prediction models using global mass and energy conservation schemes | ['Yingkai Sha', 'John S. Schreck', 'William Chapman', 'David John Gagne II'] | ['physics.ao-ph'] | Artificial Intelligence (AI) weather prediction (AIWP) models are powerful tools for medium-range forecasts but often lack physical consistency, leading to outputs that violate conservation laws. This study introduces a set of novel physics-based schemes designed to enforce the conservation of global dry air mass, mois... | 2025-01-10T01:33:40Z | null | null | null | null | null | null | null | null | null | null |
| 2501.05767 | Migician: Revealing the Magic of Free-Form Multi-Image Grounding in Multimodal Large Language Models | ['You Li', 'Heyu Huang', 'Chi Chen', 'Kaiyu Huang', 'Chao Huang', 'Zonghao Guo', 'Zhiyuan Liu', 'Jinan Xu', 'Yuhua Li', 'Ruixuan Li', 'Maosong Sun'] | ['cs.CL', 'cs.AI', 'cs.CV'] | The recent advancement of Multimodal Large Language Models (MLLMs) has significantly improved their fine-grained perception of single images and general comprehension across multiple images. However, existing MLLMs still face challenges in achieving precise grounding in complex multi-image scenarios. To address this, w... | 2025-01-10T07:56:23Z | 21 pages, 8 figures | null | null | Migician: Revealing the Magic of Free-Form Multi-Image Grounding in Multimodal Large Language Models | ['You Li', 'Heyu Huang', 'Chi Chen', 'Kaiyu Huang', 'Chao Huang', 'Zonghao Guo', 'Zhiyuan Liu', 'Jinan Xu', 'Yuhua Li', 'Ruixuan Li', 'Maosong Sun'] | 2025 | arXiv.org | 6 | 51 | ['Computer Science'] |
| 2501.05901 | Valley2: Exploring Multimodal Models with Scalable Vision-Language Design | ['Ziheng Wu', 'Zhenghao Chen', 'Ruipu Luo', 'Can Zhang', 'Yuan Gao', 'Zhentao He', 'Xian Wang', 'Haoran Lin', 'Minghui Qiu'] | ['cs.CV'] | Recently, vision-language models have made remarkable progress, demonstrating outstanding capabilities in various tasks such as image captioning and video understanding. We introduce Valley2, a novel multimodal large language model designed to enhance performance across all domains and extend the boundaries of practica... | 2025-01-10T11:53:46Z | null | null | null | Valley2: Exploring Multimodal Models with Scalable Vision-Language Design | ['Ziheng Wu', 'Zhenghao Chen', 'Ruipu Luo', 'Can Zhang', 'Yuan Gao', 'Zhentao He', 'Xian Wang', 'Haoran Lin', 'Minghui Qiu'] | 2025 | arXiv.org | 8 | 85 | ['Computer Science'] |
| 2501.05932 | DiffuSETS: 12-lead ECG Generation Conditioned on Clinical Text Reports and Patient-Specific Information | ['Yongfan Lai', 'Jiabo Chen', 'Deyun Zhang', 'Yue Wang', 'Shijia Geng', 'Hongyan Li', 'Shenda Hong'] | ['cs.LG', 'cs.AI'] | Heart disease remains a significant threat to human health. As a non-invasive diagnostic tool, the electrocardiogram (ECG) is one of the most widely used methods for cardiac screening. However, the scarcity of high-quality ECG data, driven by privacy concerns and limited medical resources, creates a pressing need for e... | 2025-01-10T12:55:34Z | null | null | null | DiffuSETS: 12-lead ECG Generation Conditioned on Clinical Text Reports and Patient-Specific Information | ['Yongfan Lai', 'Jiabo Chen', 'Deyun Zhang', 'Yue Wang', 'Shijia Geng', 'Hongyan Li', 'Shenda Hong'] | 2025 | arXiv.org | 2 | 0 | ['Computer Science'] |
| 2501.05952 | Scalable Vision Language Model Training via High Quality Data Curation | ['Hongyuan Dong', 'Zijian Kang', 'Weijie Yin', 'Xiao Liang', 'Chao Feng', 'Jiao Ran'] | ['cs.CV', 'cs.CL'] | In this paper, we introduce SAIL-VL (ScAlable Vision Language Model TraIning via High QuaLity Data Curation), an open-source vision language model (VLM) series achieving state-of-the-art (SOTA) performance in 2B and 8B parameters. The following three key improvements contribute to SAIL-VL's leading performance: (1) Sca... | 2025-01-10T13:27:04Z | ACL 2025 Main Conference | null | null | null | null | null | null | null | null | null |
| 2501.06186 | LlamaV-o1: Rethinking Step-by-step Visual Reasoning in LLMs | ['Omkar Thawakar', 'Dinura Dissanayake', 'Ketan More', 'Ritesh Thawkar', 'Ahmed Heakl', 'Noor Ahsan', 'Yuhao Li', 'Mohammed Zumri', 'Jean Lahoud', 'Rao Muhammad Anwer', 'Hisham Cholakkal', 'Ivan Laptev', 'Mubarak Shah', 'Fahad Shahbaz Khan', 'Salman Khan'] | ['cs.CV'] | Reasoning is a fundamental capability for solving complex multi-step problems, particularly in visual contexts where sequential step-wise understanding is essential. Existing approaches lack a comprehensive framework for evaluating visual reasoning and do not emphasize step-wise problem-solving. To this end, we propose... | 2025-01-10T18:59:51Z | 15 pages, 5 Figures | null | null | LlamaV-o1: Rethinking Step-by-step Visual Reasoning in LLMs | ['Omkar Thawakar', 'Dinura Dissanayake', 'Ketan More', 'Ritesh Thawkar', 'Ahmed Heakl', 'Noor Ahsan', 'Yuhao Li', 'Mohammed Zumri', 'Jean Lahoud', 'R. Anwer', 'Hisham Cholakkal', 'Ivan Laptev', 'Mubarak Shah', 'F. Khan', 'Salman H. Khan'] | 2025 | arXiv.org | 58 | 0 | ['Computer Science'] |
| 2501.06230 | BEN: Using Confidence-Guided Matting for Dichotomous Image Segmentation | ['Maxwell Meyer', 'Jack Spruyt'] | ['cs.CV', 'eess.IV'] | Current approaches to dichotomous image segmentation (DIS) treat image matting and object segmentation as fundamentally different tasks. As improvements in image segmentation become increasingly challenging to achieve, combining image matting and grayscale segmentation techniques offers promising new directions for arc... | 2025-01-08T01:30:11Z | 13 pages, 2 figures, 2 tables, and 2 algorithms | null | null | BEN: Using Confidence-Guided Matting for Dichotomous Image Segmentation | ['Maxwell Meyer', 'Jack Spruyt'] | 2025 | arXiv.org | 0 | 25 | ['Computer Science', 'Engineering'] |
| 2501.06425 | Tensor Product Attention Is All You Need | ['Yifan Zhang', 'Yifeng Liu', 'Huizhuo Yuan', 'Zhen Qin', 'Yang Yuan', 'Quanquan Gu', 'Andrew C Yao'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Scaling language models to handle longer input sequences typically necessitates large key-value (KV) caches, resulting in substantial memory overhead during inference. In this paper, we propose Tensor Product Attention (TPA), a novel attention mechanism that uses tensor decompositions to represent queries, keys, and va... | 2025-01-11T03:37:10Z | 52 pages, 11 figures | null | null | null | null | null | null | null | null | null |
| 2501.06598 | ChartCoder: Advancing Multimodal Large Language Model for Chart-to-Code Generation | ['Xuanle Zhao', 'Xianzhen Luo', 'Qi Shi', 'Chi Chen', 'Shuo Wang', 'Zhiyuan Liu', 'Maosong Sun'] | ['cs.AI'] | Multimodal Large Language Models (MLLMs) have demonstrated remarkable capabilities in chart understanding tasks. However, interpreting charts with textual descriptions often leads to information loss, as it fails to fully capture the dense information embedded in charts. In contrast, parsing charts into code provides l... | 2025-01-11T17:52:22Z | Accepted by ACL 2025 Main, Camera Ready | null | null | ChartCoder: Advancing Multimodal Large Language Model for Chart-to-Code Generation | ['Xuanle Zhao', 'Xianzhen Luo', 'Qi Shi', 'Chi Chen', 'Shuo Wang', 'Wanxiang Che', 'Zhiyuan Liu', 'Maosong Sun'] | 2025 | arXiv.org | 12 | 50 | ['Computer Science'] |
| 2501.06828 | GeoPix: Multi-Modal Large Language Model for Pixel-level Image Understanding in Remote Sensing | ['Ruizhe Ou', 'Yuan Hu', 'Fan Zhang', 'Jiaxin Chen', 'Yu Liu'] | ['cs.CV'] | Multi-modal large language models (MLLMs) have achieved remarkable success in image- and region-level remote sensing (RS) image understanding tasks, such as image captioning, visual question answering, and visual grounding. However, existing RS MLLMs lack the pixel-level dialogue capability, which involves responding t... | 2025-01-12T14:45:27Z | null | null | null | GeoPix: Multi-Modal Large Language Model for Pixel-level Image Understanding in Remote Sensing | ['Ruizhe Ou', 'Yuan Hu', 'Fan Zhang', 'Jiaxin Chen', 'Yu Liu'] | 2025 | arXiv.org | 3 | 55 | ['Computer Science'] |
| 2501.07246 | Audio-CoT: Exploring Chain-of-Thought Reasoning in Large Audio Language Model | ['Ziyang Ma', 'Zhuo Chen', 'Yuping Wang', 'Eng Siong Chng', 'Xie Chen'] | ['cs.SD', 'cs.CL', 'cs.MM', 'eess.AS'] | Large Audio-Language Models (LALMs) have demonstrated remarkable performance in tasks involving audio perception and understanding, such as speech recognition and audio captioning. However, their reasoning capabilities - critical for solving complex real-world problems - remain underexplored. In this work, we conduct t... | 2025-01-13T11:54:40Z | null | null | null | null | null | null | null | null | null | null |
| 2501.07256 | EdgeTAM: On-Device Track Anything Model | ['Chong Zhou', 'Chenchen Zhu', 'Yunyang Xiong', 'Saksham Suri', 'Fanyi Xiao', 'Lemeng Wu', 'Raghuraman Krishnamoorthi', 'Bo Dai', 'Chen Change Loy', 'Vikas Chandra', 'Bilge Soran'] | ['cs.CV'] | On top of Segment Anything Model (SAM), SAM 2 further extends its capability from image to video inputs through a memory bank mechanism and obtains a remarkable performance compared with previous methods, making it a foundation model for video segmentation task. In this paper, we aim at making SAM 2 much more efficient... | 2025-01-13T12:11:07Z | Code will be released at https://github.com/facebookresearch/EdgeTAM | null | null | null | null | null | null | null | null | null |
| 2501.07300 | Comparative analysis of optical character recognition methods for Sámi texts from the National Library of Norway | ['Tita Enstad', 'Trond Trosterud', 'Marie Iversdatter Røsok', 'Yngvil Beyer', 'Marie Roald'] | ['cs.CL', 'cs.CV'] | Optical Character Recognition (OCR) is crucial to the National Library of Norway's (NLN) digitisation process as it converts scanned documents into machine-readable text. However, for the Sámi documents in NLN's collection, the OCR accuracy is insufficient. Given that OCR quality affects downstream processes, evaluat... | 2025-01-13T13:07:51Z | To be published in Proceedings of the 25th Nordic Conference on Computational Linguistics (NoDaLiDa) | null | null | Comparative analysis of optical character recognition methods for Sámi texts from the National Library of Norway | ['Tita Enstad', 'Trond Trosterud', 'Marie Iversdatter Røsok', 'Yngvil Beyer', 'Marie Roald'] | 2025 | arXiv.org | 0 | 0 | ['Computer Science'] |
| 2501.07301 | The Lessons of Developing Process Reward Models in Mathematical Reasoning | ['Zhenru Zhang', 'Chujie Zheng', 'Yangzhen Wu', 'Beichen Zhang', 'Runji Lin', 'Bowen Yu', 'Dayiheng Liu', 'Jingren Zhou', 'Junyang Lin'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Process Reward Models (PRMs) emerge as a promising approach for process supervision in mathematical reasoning of Large Language Models (LLMs), which aim to identify and mitigate intermediate errors in the reasoning processes. However, the development of effective PRMs faces significant challenges, particularly in data ... | 2025-01-13T13:10:16Z | null | null | null | The Lessons of Developing Process Reward Models in Mathematical Reasoning | ['Zhenru Zhang', 'Chujie Zheng', 'Yangzhen Wu', 'Beichen Zhang', 'Runji Lin', 'Bowen Yu', 'Dayiheng Liu', 'Jingren Zhou', 'Junyang Lin'] | 2025 | arXiv.org | 114 | 33 | ['Computer Science'] |
| 2501.07314 | FinerWeb-10BT: Refining Web Data with LLM-Based Line-Level Filtering | ['Erik Henriksson', 'Otto Tarkka', 'Filip Ginter'] | ['cs.CL'] | Data quality is crucial for training Large Language Models (LLMs). Traditional heuristic filters often miss low-quality text or mistakenly remove valuable content. In this paper, we introduce an LLM-based line-level filtering method to enhance training data quality. We use GPT-4o mini to label a 20,000-document sample ... | 2025-01-13T13:26:50Z | 11 pages, 4 figures, 4 tables. To be published in NoDaLiDa/Baltic-HLT 2025 proceedings | null | null | FinerWeb-10BT: Refining Web Data with LLM-Based Line-Level Filtering | ['Erik Henriksson', 'Otto Tarkka', 'Filip Ginter'] | 2025 | arXiv.org | 1 | 0 | ['Computer Science'] |
| 2501.07329 | Joint Automatic Speech Recognition And Structure Learning For Better Speech Understanding | ['Jiliang Hu', 'Zuchao Li', 'Mengjia Shen', 'Haojun Ai', 'Sheng Li', 'Jun Zhang'] | ['cs.SD', 'cs.CL', 'eess.AS'] | Spoken language understanding (SLU) is a structure prediction task in the field of speech. Recently, many works on SLU that treat it as a sequence-to-sequence task have achieved great success. However, this method is not suitable for simultaneous speech recognition and understanding. In this paper, we propose a joint s... | 2025-01-13T13:43:46Z | 5 pages, 2 figures, accepted by ICASSP 2025 | null | null | null | null | null | null | null | null | null |
| 2501.07542 | Imagine while Reasoning in Space: Multimodal Visualization-of-Thought | ['Chengzu Li', 'Wenshan Wu', 'Huanyu Zhang', 'Yan Xia', 'Shaoguang Mao', 'Li Dong', 'Ivan Vulić', 'Furu Wei'] | ['cs.CL', 'cs.CV', 'cs.LG'] | Chain-of-Thought (CoT) prompting has proven highly effective for enhancing complex reasoning in Large Language Models (LLMs) and Multimodal Large Language Models (MLLMs). Yet, it struggles in complex spatial reasoning tasks. Nonetheless, human cognition extends beyond language alone, enabling the remarkable capability ... | 2025-01-13T18:23:57Z | 11 pages, 6 figures, 4 tables (27 pages, 10 figures, 16 tables including references and appendices) | null | null | Imagine while Reasoning in Space: Multimodal Visualization-of-Thought | ['Chengzu Li', 'Wenshan Wu', 'Huanyu Zhang', 'Yan Xia', 'Shaoguang Mao', 'Li Dong', 'Ivan Vulić', 'Furu Wei'] | 2025 | arXiv.org | 40 | 0 | ['Computer Science'] |
| 2501.07721 | LLMic: Romanian Foundation Language Model | ['Vlad-Andrei Bădoiu', 'Mihai-Valentin Dumitru', 'Alexandru M. Gherghescu', 'Alexandru Agache', 'Costin Raiciu'] | ['cs.CL'] | Recent advances in Large Language Models (LLMs) have demonstrated remarkable capabilities across various tasks with commercial models leading the way. While open models usually operate at a smaller scale, they maintain competitiveness through specialization and fine-tuning. However, a significant challenge persists: op... | 2025-01-13T22:14:45Z | null | null | null | LLMic: Romanian Foundation Language Model | ['Vlad-Andrei Bădoiu', 'Mihai-Valentin Dumitru', 'Alexandru M. Gherghescu', 'Alexandru Agache', 'C. Raiciu'] | 2025 | arXiv.org | 0 | 0 | ['Computer Science'] |
| 2501.07730 | Democratizing Text-to-Image Masked Generative Models with Compact Text-Aware One-Dimensional Tokens | ['Dongwon Kim', 'Ju He', 'Qihang Yu', 'Chenglin Yang', 'Xiaohui Shen', 'Suha Kwak', 'Liang-Chieh Chen'] | ['cs.CV'] | Image tokenizers form the foundation of modern text-to-image generative models but are notoriously difficult to train. Furthermore, most existing text-to-image models rely on large-scale, high-quality private datasets, making them challenging to replicate. In this work, we introduce Text-Aware Transformer-based 1-Dimen... | 2025-01-13T22:37:17Z | Project page at https://tacju.github.io/projects/maskgen.html | null | null | Democratizing Text-to-Image Masked Generative Models with Compact Text-Aware One-Dimensional Tokens | ['Dongwon Kim', 'Ju He', 'Qihang Yu', 'Chenglin Yang', 'Xiaohui Shen', 'Suha Kwak', 'Liang-Chieh Chen'] | 2025 | arXiv.org | 11 | 65 | ['Computer Science'] |
| 2501.07783 | Parameter-Inverted Image Pyramid Networks for Visual Perception and Multimodal Understanding | ['Zhaokai Wang', 'Xizhou Zhu', 'Xue Yang', 'Gen Luo', 'Hao Li', 'Changyao Tian', 'Wenhan Dou', 'Junqi Ge', 'Lewei Lu', 'Yu Qiao', 'Jifeng Dai'] | ['cs.CV', 'cs.CL'] | Image pyramids are widely adopted in top-performing methods to obtain multi-scale features for precise visual perception and understanding. However, current image pyramids use the same large-scale model to process multiple resolutions of images, leading to significant computational cost. To address this challenge, we p... | 2025-01-14T01:57:41Z | null | null | null | Parameter-Inverted Image Pyramid Networks for Visual Perception and Multimodal Understanding | ['Zhaokai Wang', 'Xizhou Zhu', 'Xue Yang', 'Gen Luo', 'Hao Li', 'Changyao Tian', 'Wenhan Dou', 'Junqi Ge', 'Lewei Lu', 'Yu Qiao', 'Jifeng Dai'] | 2025 | arXiv.org | 5 | 0 | ['Computer Science'] |
| 2501.07888 | Tarsier2: Advancing Large Vision-Language Models from Detailed Video Description to Comprehensive Video Understanding | ['Liping Yuan', 'Jiawei Wang', 'Haomiao Sun', 'Yuchen Zhang', 'Yuan Lin'] | ['cs.CV', 'cs.AI'] | We introduce Tarsier2, a state-of-the-art large vision-language model (LVLM) designed for generating detailed and accurate video descriptions, while also exhibiting superior general video understanding capabilities. Tarsier2 achieves significant advancements through three key upgrades: (1) Scaling pre-training data fro... | 2025-01-14T06:54:39Z | null | null | null | Tarsier2: Advancing Large Vision-Language Models from Detailed Video Description to Comprehensive Video Understanding | ['Liping Yuan', 'Jiawei Wang', 'Haomiao Sun', 'Yuchen Zhang', 'Yuan Lin'] | 2025 | arXiv.org | 13 | 0 | ['Computer Science'] |
| 2501.08120 | In-situ graph reasoning and knowledge expansion using Graph-PReFLexOR | ['Markus J. Buehler'] | ['cs.AI', 'cond-mat.dis-nn', 'cond-mat.mtrl-sci', 'cs.CL'] | The pursuit of automated scientific discovery has fueled progress from symbolic logic to modern AI, forging new frontiers in reasoning and pattern recognition. Transformers function as potential systems, where every possible relationship remains latent potentiality until tasks impose constraints, akin to measurement. Y... | 2025-01-14T13:52:41Z | null | null | null | null | null | null | null | null | null | null |
| 2501.08187 | A Multi-Modal AI Copilot for Single-Cell Analysis with Instruction Following | ['Yin Fang', 'Xinle Deng', 'Kangwei Liu', 'Ningyu Zhang', 'Jingyang Qian', 'Penghui Yang', 'Xiaohui Fan', 'Huajun Chen'] | ['cs.CL', 'cs.AI', 'cs.CE', 'cs.HC', 'cs.LG', 'q-bio.CB'] | Large language models excel at interpreting complex natural language instructions, enabling them to perform a wide range of tasks. In the life sciences, single-cell RNA sequencing (scRNA-seq) data serves as the "language of cellular biology", capturing intricate gene expression patterns at the single-cell level. Howeve... | 2025-01-14T15:12:19Z | 37 pages; 13 figures; Code: https://github.com/zjunlp/Instructcell, Models: https://huggingface.co/zjunlp/Instructcell-chat, https://huggingface.co/zjunlp/InstructCell-instruct | null | null | null | null | null | null | null | null | null |
| 2501.08225 | FramePainter: Endowing Interactive Image Editing with Video Diffusion Priors | ['Yabo Zhang', 'Xinpeng Zhou', 'Yihan Zeng', 'Hang Xu', 'Hui Li', 'Wangmeng Zuo'] | ['cs.CV'] | Interactive image editing allows users to modify images through visual interaction operations such as drawing, clicking, and dragging. Existing methods construct such supervision signals from videos, as they capture how objects change with various physical interactions. However, these models are usually built upon text... | 2025-01-14T16:09:16Z | Code: https://github.com/YBYBZhang/FramePainter | null | null | FramePainter: Endowing Interactive Image Editing with Video Diffusion Priors | ['Yabo Zhang', 'Xinpeng Zhou', 'Yihan Zeng', 'Hang Xu', 'Hui Li', 'Wangmeng Zuo'] | 2025 | arXiv.org | 4 | 0 | ['Computer Science'] |
| 2501.08295 | LayerAnimate: Layer-level Control for Animation | ['Yuxue Yang', 'Lue Fan', 'Zuzeng Lin', 'Feng Wang', 'Zhaoxiang Zhang'] | ['cs.CV'] | Traditional animation production decomposes visual elements into discrete layers to enable independent processing for sketching, refining, coloring, and in-betweening. Existing anime generation video methods typically treat animation as a distinct data domain different from real-world videos, lacking fine-grained contr... | 2025-01-14T18:22:21Z | Project page: https://layeranimate.github.io | null | null | null | null | null | null | null | null | null |
| 2501.08303 | Advancing Semantic Future Prediction through Multimodal Visual Sequence Transformers | ['Efstathios Karypidis', 'Ioannis Kakogeorgiou', 'Spyros Gidaris', 'Nikos Komodakis'] | ['cs.CV'] | Semantic future prediction is important for autonomous systems navigating dynamic environments. This paper introduces FUTURIST, a method for multimodal future semantic prediction that uses a unified and efficient visual sequence transformer architecture. Our approach incorporates a multimodal masked visual modeling obj... | 2025-01-14T18:34:14Z | null | null | null | Advancing Semantic Future Prediction through Multimodal Visual Sequence Transformers | ['Efstathios Karypidis', 'Ioannis Kakogeorgiou', 'Spyros Gidaris', 'Nikos Komodakis'] | 2025 | Computer Vision and Pattern Recognition | 2 | 0 | ['Computer Science'] |
| 2501.08313 | MiniMax-01: Scaling Foundation Models with Lightning Attention | ['MiniMax', 'Aonian Li', 'Bangwei Gong', 'Bo Yang', 'Boji Shan', 'Chang Liu', 'Cheng Zhu', 'Chunhao Zhang', 'Congchao Guo', 'Da Chen', 'Dong Li', 'Enwei Jiao', 'Gengxin Li', 'Guojun Zhang', 'Haohai Sun', 'Houze Dong', 'Jiadai Zhu', 'Jiaqi Zhuang', 'Jiayuan Song', 'Jin Zhu', 'Jingtao Han', 'Jingyang Li', 'Junbin Xie', '... | ['cs.CL', 'cs.CV'] | We introduce MiniMax-01 series, including MiniMax-Text-01 and MiniMax-VL-01, which are comparable to top-tier models while offering superior capabilities in processing longer contexts. The core lies in lightning attention and its efficient scaling. To maximize computational capacity, we integrate it with Mixture of Exp... | 2025-01-14T18:50:05Z | A technical report from MiniMax. The authors are listed in alphabetical order. We open-sourced our MiniMax-01 at https://github.com/MiniMax-AI | null | null | null | null | null | null | null | null | null |
| 2501.08335 | MERaLiON-TextLLM: Cross-Lingual Understanding of Large Language Models in Chinese, Indonesian, Malay, and Singlish | ['Xin Huang', 'Tarun Kumar Vangani', 'Minh Duc Pham', 'Xunlong Zou', 'Bin Wang', 'Zhengyuan Liu', 'Ai Ti Aw'] | ['cs.CL', 'cs.AI'] | Multilingual large language models (MLLMs) have shown impressive capabilities across a variety of languages. However, efficacy can differ greatly between different language families, especially for those with limited linguistic resources. This report presents MERaLiON-TextLLM, a series of open-source language models sp... | 2024-12-21T05:50:48Z | null | null | null | MERaLiON-TextLLM: Cross-Lingual Understanding of Large Language Models in Chinese, Indonesian, Malay, and Singlish | ['Xin Huang', 'T. K. Vangani', 'Minh Duc Pham', 'Xunlong Zou', 'Bin Wang', 'Zhengyuan Liu', 'AiTi Aw'] | 2024 | arXiv.org | 2 | 8 | ['Computer Science'] |
| 2501.08453 | Vchitect-2.0: Parallel Transformer for Scaling Up Video Diffusion Models | ['Weichen Fan', 'Chenyang Si', 'Junhao Song', 'Zhenyu Yang', 'Yinan He', 'Long Zhuo', 'Ziqi Huang', 'Ziyue Dong', 'Jingwen He', 'Dongwei Pan', 'Yi Wang', 'Yuming Jiang', 'Yaohui Wang', 'Peng Gao', 'Xinyuan Chen', 'Hengjie Li', 'Dahua Lin', 'Yu Qiao', 'Ziwei Liu'] | ['cs.CV', 'cs.LG'] | We present Vchitect-2.0, a parallel transformer architecture designed to scale up video diffusion models for large-scale text-to-video generation. The overall Vchitect-2.0 system has several key designs. (1) By introducing a novel Multimodal Diffusion Block, our approach achieves consistent alignment between text descr... | 2025-01-14T21:53:11Z | null | null | null | null | null | null | null | null | null | null |
2501.08549 | The Devil is in Temporal Token: High Quality Video Reasoning
Segmentation | ['Sitong Gong', 'Yunzhi Zhuge', 'Lu Zhang', 'Zongxin Yang', 'Pingping Zhang', 'Huchuan Lu'] | ['cs.CV', 'cs.AI'] | Existing methods for Video Reasoning Segmentation rely heavily on a single
special token to represent the object in the keyframe or the entire video,
inadequately capturing spatial complexity and inter-frame motion. To overcome
these challenges, we propose VRS-HQ, an end-to-end video reasoning segmentation
approach tha... | 2025-01-15T03:17:24Z | null | CVPR 2025 | null | null | null | null | null | null | null | null |
2501.08580 | Densely Connected Parameter-Efficient Tuning for Referring Image
Segmentation | ['Jiaqi Huang', 'Zunnan Xu', 'Ting Liu', 'Yong Liu', 'Haonan Han', 'Kehong Yuan', 'Xiu Li'] | ['cs.CV'] | In the domain of computer vision, Parameter-Efficient Tuning (PET) is
increasingly replacing the traditional paradigm of pre-training followed by
full fine-tuning. PET is particularly favored for its effectiveness in large
foundation models, as it streamlines transfer learning costs and optimizes
hardware utilization. ... | 2025-01-15T05:00:03Z | Accepted by AAAI2025 | null | null | null | null | null | null | null | null | null |
2501.08617 | RLHS: Mitigating Misalignment in RLHF with Hindsight Simulation | ['Kaiqu Liang', 'Haimin Hu', 'Ryan Liu', 'Thomas L. Griffiths', 'Jaime Fernández Fisac'] | ['cs.LG', 'cs.AI', 'cs.CL'] | While Reinforcement Learning from Human Feedback (RLHF) has shown promise in
aligning generative AI, we present empirical evidence that it can also cause
severe, systematic misalignment. We hypothesize that this stems from evaluator
feedback depending on downstream outcome predictions (foresight) that can be
influenced... | 2025-01-15T06:33:15Z | 27 pages, 18 figures | null | null | RLHS: Mitigating Misalignment in RLHF with Hindsight Simulation | ['Kaiqu Liang', 'Haimin Hu', 'Ryan Liu', 'Thomas L. Griffiths', 'J. F. Fisac'] | 2025 | arXiv.org | 4 | 74 | ['Computer Science'] |
2501.08828 | MMDocIR: Benchmarking Multi-Modal Retrieval for Long Documents | ['Kuicai Dong', 'Yujing Chang', 'Xin Deik Goh', 'Dexun Li', 'Ruiming Tang', 'Yong Liu'] | ['cs.IR', 'cs.AI', 'cs.CL', 'cs.CV'] | Multimodal document retrieval aims to identify and retrieve various forms of
multimodal content, such as figures, tables, charts, and layout information
from extensive documents. Despite its increasing popularity, there is a notable
lack of a comprehensive and robust benchmark to effectively evaluate the
performance of... | 2025-01-15T14:30:13Z | https://huggingface.co/MMDocIR | null | null | MMDocIR: Benchmarking Multi-Modal Retrieval for Long Documents | ['Kuicai Dong', 'Yujing Chang', 'Derrick-Goh-Xin Deik', 'Dexun Li', 'Ruiming Tang', 'Yong Liu'] | 2025 | arXiv.org | 7 | 54 | ['Computer Science'] |
2501.08994 | RepVideo: Rethinking Cross-Layer Representation for Video Generation | ['Chenyang Si', 'Weichen Fan', 'Zhengyao Lv', 'Ziqi Huang', 'Yu Qiao', 'Ziwei Liu'] | ['cs.CV'] | Video generation has achieved remarkable progress with the introduction of
diffusion models, which have significantly improved the quality of generated
videos. However, recent research has primarily focused on scaling up model
training, while offering limited insights into the direct impact of
representations on the vi... | 2025-01-15T18:20:37Z | Project page: https://vchitect.github.io/RepVid-Webpage | null | null | RepVideo: Rethinking Cross-Layer Representation for Video Generation | ['Chenyang Si', 'Weichen Fan', 'Zhengyao Lv', 'Ziqi Huang', 'Yu Qiao', 'Ziwei Liu'] | 2025 | arXiv.org | 4 | 0 | ['Computer Science'] |
2501.09213 | FineMedLM-o1: Enhancing the Medical Reasoning Ability of LLM from
Supervised Fine-Tuning to Test-Time Training | ['Hongzhou Yu', 'Tianhao Cheng', 'Ying Cheng', 'Rui Feng'] | ['cs.CL'] | Recent advancements in large language models (LLMs) have shown promise in
medical applications such as disease diagnosis and treatment planning. However,
most existing medical LLMs struggle with the advanced reasoning required for
complex clinical scenarios, such as differential diagnosis or personalized
treatment sugg... | 2025-01-16T00:19:19Z | null | null | null | null | null | null | null | null | null | null |
2501.09446 | Double Visual Defense: Adversarial Pre-training and Instruction Tuning
for Improving Vision-Language Model Robustness | ['Zeyu Wang', 'Cihang Xie', 'Brian Bartoldson', 'Bhavya Kailkhura'] | ['cs.CV'] | This paper investigates the robustness of vision-language models against
adversarial visual perturbations and introduces a novel ``double visual
defense" to enhance this robustness. Unlike previous approaches that resort to
lightweight adversarial fine-tuning of a pre-trained CLIP model, we perform
large-scale adversar... | 2025-01-16T10:20:48Z | null | null | null | null | null | null | null | null | null | null |
2501.09484 | Exploring the Inquiry-Diagnosis Relationship with Advanced Patient
Simulators | ['Zhaocheng Liu', 'Quan Tu', 'Wen Ye', 'Yu Xiao', 'Zhishou Zhang', 'Hengfu Cui', 'Yalun Zhu', 'Qiang Ju', 'Shizheng Li', 'Jian Xie'] | ['cs.CL'] | Recently, large language models have shown great potential to transform
online medical consultation. Despite this, most research targets improving
diagnostic accuracy with ample information, often overlooking the inquiry
phase. Some studies try to evaluate or refine doctor models by using
prompt-engineered patient agen... | 2025-01-16T11:41:14Z | null | null | null | null | null | null | null | null | null | null |
2501.09503 | AnyStory: Towards Unified Single and Multiple Subject Personalization in
Text-to-Image Generation | ['Junjie He', 'Yuxiang Tuo', 'Binghui Chen', 'Chongyang Zhong', 'Yifeng Geng', 'Liefeng Bo'] | ['cs.CV'] | Recently, large-scale generative models have demonstrated outstanding
text-to-image generation capabilities. However, generating high-fidelity
personalized images with specific subjects still presents challenges,
especially in cases involving multiple subjects. In this paper, we propose
AnyStory, a unified approach for... | 2025-01-16T12:28:39Z | Tech report; Project page:
https://aigcdesigngroup.github.io/AnyStory/ | null | null | null | null | null | null | null | null | null |
2501.09720 | A Simple Aerial Detection Baseline of Multimodal Language Models | ['Qingyun Li', 'Yushi Chen', 'Xinya Shu', 'Dong Chen', 'Xin He', 'Yi Yu', 'Xue Yang'] | ['cs.CV', 'cs.AI'] | The multimodal language models (MLMs) based on generative pre-trained
Transformer are considered powerful candidates for unifying various domains and
tasks. MLMs developed for remote sensing (RS) have demonstrated outstanding
performance in multiple tasks, such as visual question answering and visual
grounding. In addi... | 2025-01-16T18:09:22Z | 4 pages, 1 table, 4 figures | null | null | null | null | null | null | null | null | null |
2501.09729 | Generating particle physics Lagrangians with transformers | ['Yong Sheng Koay', 'Rikard Enberg', 'Stefano Moretti', 'Eliel Camargo-Molina'] | ['cs.LG', 'cs.SC', 'hep-ph', 'hep-th'] | In physics, Lagrangians provide a systematic way to describe laws governing
physical systems. In the context of particle physics, they encode the
interactions and behavior of the fundamental building blocks of our universe.
By treating Lagrangians as complex, rule-based constructs similar to linguistic
expressions, we ... | 2025-01-16T18:25:50Z | 32 pages, 11 figures, 18 tables | null | null | Generating particle physics Lagrangians with transformers | ['Yong Sheng Koay', 'R. Enberg', 'Stefano Moretti', 'Eliel Camargo-Molina'] | 2025 | arXiv.org | 0 | 7 | ['Computer Science', 'Physics'] |
2501.09747 | FAST: Efficient Action Tokenization for Vision-Language-Action Models | ['Karl Pertsch', 'Kyle Stachowicz', 'Brian Ichter', 'Danny Driess', 'Suraj Nair', 'Quan Vuong', 'Oier Mees', 'Chelsea Finn', 'Sergey Levine'] | ['cs.RO', 'cs.LG'] | Autoregressive sequence models, such as Transformer-based vision-language
action (VLA) policies, can be tremendously effective for capturing complex and
generalizable robotic behaviors. However, such models require us to choose a
tokenization of our continuous action signals, which determines how the
discrete symbols p... | 2025-01-16T18:57:04Z | Website: https://www.pi.website/research/fast | null | null | null | null | null | null | null | null | null |
2501.09749 | Enhancing Lexicon-Based Text Embeddings with Large Language Models | ['Yibin Lei', 'Tao Shen', 'Yu Cao', 'Andrew Yates'] | ['cs.CL', 'cs.IR'] | Recent large language models (LLMs) have demonstrated exceptional performance
on general-purpose text embedding tasks. While dense embeddings have dominated
related research, we introduce the first Lexicon-based EmbeddiNgS (LENS)
leveraging LLMs that achieve competitive performance on these tasks. Regarding
the inheren... | 2025-01-16T18:57:20Z | null | null | null | null | null | null | null | null | null | null |
2501.09768 | Can Large Language Models Predict the Outcome of Judicial Decisions? | ['Mohamed Bayan Kmainasi', 'Ali Ezzat Shahroor', 'Amani Al-Ghraibah'] | ['cs.CL', 'cs.AI'] | Large Language Models (LLMs) have shown exceptional capabilities in Natural
Language Processing (NLP) across diverse domains. However, their application in
specialized tasks such as Legal Judgment Prediction (LJP) for low-resource
languages like Arabic remains underexplored. In this work, we address this gap
by develop... | 2025-01-15T11:32:35Z | null | null | null | null | null | null | null | null | null | null |
2501.09781 | VideoWorld: Exploring Knowledge Learning from Unlabeled Videos | ['Zhongwei Ren', 'Yunchao Wei', 'Xun Guo', 'Yao Zhao', 'Bingyi Kang', 'Jiashi Feng', 'Xiaojie Jin'] | ['cs.CV'] | This work explores whether a deep generative model can learn complex
knowledge solely from visual input, in contrast to the prevalent focus on
text-based models like large language models (LLMs). We develop VideoWorld, an
auto-regressive video generation model trained on unlabeled video data, and
test its knowledge acq... | 2025-01-16T18:59:10Z | Code and models are released at:
https://maverickren.github.io/VideoWorld.github.io/ | null | null | VideoWorld: Exploring Knowledge Learning from Unlabeled Videos | ['Zhongwei Ren', 'Yunchao Wei', 'Xun Guo', 'Yao Zhao', 'Bingyi Kang', 'Jiashi Feng', 'Xiaojie Jin'] | 2025 | arXiv.org | 15 | 66 | ['Computer Science'] |
2501.09782 | SMPLest-X: Ultimate Scaling for Expressive Human Pose and Shape
Estimation | ['Wanqi Yin', 'Zhongang Cai', 'Ruisi Wang', 'Ailing Zeng', 'Chen Wei', 'Qingping Sun', 'Haiyi Mei', 'Yanjun Wang', 'Hui En Pang', 'Mingyuan Zhang', 'Lei Zhang', 'Chen Change Loy', 'Atsushi Yamashita', 'Lei Yang', 'Ziwei Liu'] | ['cs.CV', 'cs.GR', 'cs.HC', 'cs.MM', 'cs.RO'] | Expressive human pose and shape estimation (EHPS) unifies body, hands, and
face motion capture with numerous applications. Despite encouraging progress,
current state-of-the-art methods focus on training innovative architectural
designs on confined datasets. In this work, we investigate the impact of
scaling up EHPS to... | 2025-01-16T18:59:46Z | An extension of SMPLer-X [arXiv:2309.17448]. Homepage:
https://caizhongang.com/projects/SMPLer-X/ | null | null | SMPLest-X: Ultimate Scaling for Expressive Human Pose and Shape Estimation | ['Wanqi Yin', 'Zhongang Cai', 'Ruisi Wang', 'Ailing Zeng', 'Chen Wei', 'Qingping Sun', 'Haiyi Mei', 'Yanjun Wang', 'Hui En Pang', 'Mingyuan Zhang', 'Lei Zhang', 'Chen Change Loy', 'Atsushi Yamashita', 'Lei Yang', 'Ziwei Liu'] | 2025 | arXiv.org | 3 | 0 | ['Computer Science'] |
2501.10018 | DiffuEraser: A Diffusion Model for Video Inpainting | ['Xiaowen Li', 'Haolan Xue', 'Peiran Ren', 'Liefeng Bo'] | ['cs.CV'] | Recent video inpainting algorithms integrate flow-based pixel propagation
with transformer-based generation to leverage optical flow for restoring
textures and objects using information from neighboring frames, while
completing masked regions through visual Transformers. However, these
approaches often encounter blurri... | 2025-01-17T08:03:02Z | 11pages, 13figures | null | null | null | null | null | null | null | null | null |
2501.10021 | X-Dyna: Expressive Dynamic Human Image Animation | ['Di Chang', 'Hongyi Xu', 'You Xie', 'Yipeng Gao', 'Zhengfei Kuang', 'Shengqu Cai', 'Chenxu Zhang', 'Guoxian Song', 'Chao Wang', 'Yichun Shi', 'Zeyuan Chen', 'Shijie Zhou', 'Linjie Luo', 'Gordon Wetzstein', 'Mohammad Soleymani'] | ['cs.CV'] | We introduce X-Dyna, a novel zero-shot, diffusion-based pipeline for
animating a single human image using facial expressions and body movements
derived from a driving video, that generates realistic, context-aware dynamics
for both the subject and the surrounding environment. Building on prior
approaches centered on hu... | 2025-01-17T08:10:53Z | Project page:https://x-dyna.github.io/xdyna.github.io/
Code:https://github.com/bytedance/X-Dyna
Model:https://huggingface.co/Boese0601/X-Dyna | null | null | X-Dyna: Expressive Dynamic Human Image Animation | ['Di Chang', 'Hongyi Xu', 'You Xie', 'Yipeng Gao', 'Zhengfei Kuang', 'Shengqu Cai', 'Chenxu Zhang', 'Guoxian Song', 'Chao Wang', 'Yichun Shi', 'Zeyuan Chen', 'Shijie Zhou', 'Linjie Luo', 'Gordon Wetzstein', 'Mohammad Soleymani'] | 2025 | Computer Vision and Pattern Recognition | 6 | 0 | ['Computer Science'] |
2501.10064 | One-D-Piece: Image Tokenizer Meets Quality-Controllable Compression | ['Keita Miwa', 'Kento Sasaki', 'Hidehisa Arai', 'Tsubasa Takahashi', 'Yu Yamaguchi'] | ['cs.CV', 'cs.LG'] | Current image tokenization methods require a large number of tokens to
capture the information contained within images. Although the amount of
information varies across images, most image tokenizers only support
fixed-length tokenization, leading to inefficiency in token allocation. In this
study, we introduce One-D-Pi... | 2025-01-17T09:29:33Z | Our Project Page:
https://turingmotors.github.io/one-d-piece-tokenizer | null | null | null | null | null | null | null | null | null |
2501.10105 | Universal Actions for Enhanced Embodied Foundation Models | ['Jinliang Zheng', 'Jianxiong Li', 'Dongxiu Liu', 'Yinan Zheng', 'Zhihao Wang', 'Zhonghong Ou', 'Yu Liu', 'Jingjing Liu', 'Ya-Qin Zhang', 'Xianyuan Zhan'] | ['cs.RO', 'cs.AI', 'cs.CV'] | Training on diverse, internet-scale data is a key factor in the success of
recent large foundation models. Yet, using the same recipe for building
embodied agents has faced noticeable difficulties. Despite the availability of
many crowd-sourced embodied datasets, their action spaces often exhibit
significant heterogene... | 2025-01-17T10:45:22Z | CVPR 2025 | null | null | null | null | null | null | null | null | null |
2501.10120 | PaSa: An LLM Agent for Comprehensive Academic Paper Search | ['Yichen He', 'Guanhua Huang', 'Peiyuan Feng', 'Yuan Lin', 'Yuchen Zhang', 'Hang Li', 'Weinan E'] | ['cs.IR', 'cs.LG'] | We introduce PaSa, an advanced Paper Search agent powered by large language
models. PaSa can autonomously make a series of decisions, including invoking
search tools, reading papers, and selecting relevant references, to ultimately
obtain comprehensive and accurate results for complex scholar queries. We
optimize PaSa ... | 2025-01-17T11:12:28Z | null | null | null | PaSa: An LLM Agent for Comprehensive Academic Paper Search | ['Yichen He', 'Guanhua Huang', 'Peiyuan Feng', 'Yuan Lin', 'Yuchen Zhang', 'Hang Li', 'E. Weinan'] | 2025 | arXiv.org | 11 | 35 | ['Computer Science'] |
2501.10322 | Hierarchical Autoregressive Transformers: Combining Byte- and Word-Level
Processing for Robust, Adaptable Language Models | ['Pit Neitemeier', 'Björn Deiseroth', 'Constantin Eichenberg', 'Lukas Balles'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Tokenization is a fundamental step in natural language processing, breaking
text into units that computational models can process. While learned subword
tokenizers have become the de-facto standard, they present challenges such as
large vocabularies, limited adaptability to new domains or languages, and
sensitivity to ... | 2025-01-17T17:51:53Z | null | null | null | null | null | null | null | null | null | null |
2501.10648 | DNA 1.0 Technical Report | ['Jungyup Lee', 'Jemin Kim', 'Sang Park', 'SeungJae Lee'] | ['cs.CL'] | In this report, we present DNA 1.0 8B Instruct, a state-of-the-art bilingual
language model optimized for Korean and English language tasks. By applying
continual pre-training (CPT) with high-quality Korean datasets to Llama 3.1 8B
and subsequent supervised fine-tuning (SFT), we create an instruction-following
model wi... | 2025-01-18T03:48:56Z | null | null | null | DNA 1.0 Technical Report | ['Jungyup Lee', 'Jemin Kim', 'Sang Park', 'SeungJae Lee'] | 2025 | arXiv.org | 0 | 0 | ['Computer Science'] |
2501.10979 | Control LLM: Controlled Evolution for Intelligence Retention in LLM | ['Haichao Wei', 'Yunxiang Ren', 'Zhoutong Fu', 'Aman Lunia', 'Yi-Lin Chen', 'Alice Leung', 'Ya Xu'] | ['cs.LG'] | Large Language Models (LLMs) demand significant computational resources,
making it essential to enhance their capabilities without retraining from
scratch. A key challenge in this domain is \textit{catastrophic forgetting}
(CF), which hampers performance during Continuous Pre-training (CPT) and
Continuous Supervised Fi... | 2025-01-19T08:06:06Z | 8 pages | null | null | null | null | null | null | null | null | null |
2501.11120 | Tell me about yourself: LLMs are aware of their learned behaviors | ['Jan Betley', 'Xuchan Bao', 'Martín Soto', 'Anna Sztyber-Betley', 'James Chua', 'Owain Evans'] | ['cs.CL', 'cs.AI', 'cs.CR', 'cs.LG'] | We study behavioral self-awareness -- an LLM's ability to articulate its
behaviors without requiring in-context examples. We finetune LLMs on datasets
that exhibit particular behaviors, such as (a) making high-risk economic
decisions, and (b) outputting insecure code. Despite the datasets containing no
explicit descrip... | 2025-01-19T17:28:12Z | Submitted to ICLR 2025. 17 pages, 13 figures | null | null | null | null | null | null | null | null | null |
2501.11561 | Teaching Large Language Models to Regress Accurate Image Quality Scores
using Score Distribution | ['Zhiyuan You', 'Xin Cai', 'Jinjin Gu', 'Tianfan Xue', 'Chao Dong'] | ['cs.CV'] | With the rapid advancement of Multi-modal Large Language Models (MLLMs),
MLLM-based Image Quality Assessment (IQA) methods have shown promising
performance in linguistic quality description. However, current methods still
fall short in accurately scoring image quality. In this work, we aim to
leverage MLLMs to regress ... | 2025-01-20T16:04:57Z | Accepted by CVPR 2025 | null | null | Teaching Large Language Models to Regress Accurate Image Quality Scores using Score Distribution | ['Zhiyuan You', 'Xin Cai', 'Jinjin Gu', 'Tianfan Xue', 'Chao Dong'] | 2025 | arXiv.org | 14 | 95 | ['Computer Science'] |
2501.11587 | Recurrent Diffusion for Large-Scale Parameter Generation | ['Kai Wang', 'Dongwen Tang', 'Wangbo Zhao', 'Konstantin Schürholt', 'Zhangyang Wang', 'Yang You'] | ['cs.LG', 'cs.AI'] | Parameter generation has long struggled to match the scale of today's large
vision and language models, curbing its broader utility. In this paper, we
introduce Recurrent Diffusion for Large Scale Parameter Generation (RPG), a
novel framework that generates full neural network parameters up to hundreds of
millions on a s... | 2025-01-20T16:46:26Z | Generating 200 million parameters in just minutes | null | null | Recurrent Diffusion for Large-Scale Parameter Generation | ['Kai Wang', 'Dongwen Tang', 'Wangbo Zhao', 'Yang You'] | 2025 | arXiv.org | 5 | 76 | ['Computer Science'] |
2501.12079 | Directional Diffusion-Style Code Editing Pre-training | ['Qingyuan Liang', 'Zeyu Sun', 'Qihao Zhu', 'Junhao Hu', 'Yifan Zhao', 'Yizhou Chen', 'Mingxuan Zhu', 'Guoqing Wang', 'Lu Zhang'] | ['cs.SE'] | Code pre-trained models have shown promising effectiveness in various
software engineering tasks. Among these tasks, many tasks are related to
software evolution and/or code editing. However, existing code pre-trained
models often overlook the real-world code editing data and the evolutionary
nature of the editing proc... | 2025-01-21T12:10:18Z | null | null | null | null | null | null | null | null | null | null |
2501.12202 | Hunyuan3D 2.0: Scaling Diffusion Models for High Resolution Textured 3D
Assets Generation | ['Zibo Zhao', 'Zeqiang Lai', 'Qingxiang Lin', 'Yunfei Zhao', 'Haolin Liu', 'Shuhui Yang', 'Yifei Feng', 'Mingxin Yang', 'Sheng Zhang', 'Xianghui Yang', 'Huiwen Shi', 'Sicong Liu', 'Junta Wu', 'Yihang Lian', 'Fan Yang', 'Ruining Tang', 'Zebin He', 'Xinzhou Wang', 'Jian Liu', 'Xuhui Zuo', 'Zhuo Chen', 'Biwen Lei', 'Haoha... | ['cs.CV'] | We present Hunyuan3D 2.0, an advanced large-scale 3D synthesis system for
generating high-resolution textured 3D assets. This system includes two
foundation components: a large-scale shape generation model -- Hunyuan3D-DiT,
and a large-scale texture synthesis model -- Hunyuan3D-Paint. The shape
generative model, built ... | 2025-01-21T15:16:54Z | GitHub link: https://github.com/Tencent/Hunyuan3D-2 | null | null | null | null | null | null | null | null | null |
2501.12326 | UI-TARS: Pioneering Automated GUI Interaction with Native Agents | ['Yujia Qin', 'Yining Ye', 'Junjie Fang', 'Haoming Wang', 'Shihao Liang', 'Shizuo Tian', 'Junda Zhang', 'Jiahao Li', 'Yunxin Li', 'Shijue Huang', 'Wanjun Zhong', 'Kuanye Li', 'Jiale Yang', 'Yu Miao', 'Woyu Lin', 'Longxiang Liu', 'Xu Jiang', 'Qianli Ma', 'Jingyu Li', 'Xiaojun Xiao', 'Kai Cai', 'Chuang Li', 'Yaowei Zheng... | ['cs.AI', 'cs.CL', 'cs.CV', 'cs.HC'] | This paper introduces UI-TARS, a native GUI agent model that solely perceives
the screenshots as input and performs human-like interactions (e.g., keyboard
and mouse operations). Unlike prevailing agent frameworks that depend on
heavily wrapped commercial models (e.g., GPT-4o) with expert-crafted prompts
and workflows,... | 2025-01-21T17:48:10Z | null | null | null | UI-TARS: Pioneering Automated GUI Interaction with Native Agents | ['Yujia Qin', 'Yining Ye', 'Junjie Fang', 'Haoming Wang', 'Shihao Liang', 'Shizuo Tian', 'Junda Zhang', 'Jiahao Li', 'Yunxin Li', 'Shijue Huang', 'Wanjun Zhong', 'Kuanye Li', 'Jiale Yang', 'Yu Miao', 'Woyu Lin', 'Longxiang Liu', 'Xu Jiang', 'Qianli Ma', 'Jingyu Li', 'Xiaojun Xiao', 'Kai Cai', 'Chuang Li', 'Yaowei Zheng... | 2025 | arXiv.org | 69 | 0 | ['Computer Science'] |
2501.12327 | VARGPT: Unified Understanding and Generation in a Visual Autoregressive
Multimodal Large Language Model | ['Xianwei Zhuang', 'Yuxin Xie', 'Yufan Deng', 'Liming Liang', 'Jinghan Ru', 'Yuguo Yin', 'Yuexian Zou'] | ['cs.CV'] | We present VARGPT, a novel multimodal large language model (MLLM) that
unifies visual understanding and generation within a single autoregressive
framework. VARGPT employs a next-token prediction paradigm for visual
understanding and a next-scale prediction paradigm for visual autoregressive
generation. VARGPT innovati... | 2025-01-21T17:50:43Z | null | null | null | null | null | null | null | null | null | null |
2501.12368 | InternLM-XComposer2.5-Reward: A Simple Yet Effective Multi-Modal Reward
Model | ['Yuhang Zang', 'Xiaoyi Dong', 'Pan Zhang', 'Yuhang Cao', 'Ziyu Liu', 'Shengyuan Ding', 'Shenxi Wu', 'Yubo Ma', 'Haodong Duan', 'Wenwei Zhang', 'Kai Chen', 'Dahua Lin', 'Jiaqi Wang'] | ['cs.CV', 'cs.CL'] | Despite the promising performance of Large Vision Language Models (LVLMs) in
visual understanding, they occasionally generate incorrect outputs. While
reward models (RMs) with reinforcement learning or test-time scaling offer the
potential for improving generation quality, a critical gap remains: publicly
available mul... | 2025-01-21T18:47:32Z | ACL 2025 Findings | null | null | null | null | null | null | null | null | null |
2501.12375 | Video Depth Anything: Consistent Depth Estimation for Super-Long Videos | ['Sili Chen', 'Hengkai Guo', 'Shengnan Zhu', 'Feihu Zhang', 'Zilong Huang', 'Jiashi Feng', 'Bingyi Kang'] | ['cs.CV', 'cs.AI'] | Depth Anything has achieved remarkable success in monocular depth estimation
with strong generalization ability. However, it suffers from temporal
inconsistency in videos, hindering its practical applications. Various methods
have been proposed to alleviate this issue by leveraging video generation
models or introducin... | 2025-01-21T18:53:30Z | Project page: https://videodepthanything.github.io/ | null | null | null | null | null | null | null | null | null |
2501.12386 | InternVideo2.5: Empowering Video MLLMs with Long and Rich Context
Modeling | ['Yi Wang', 'Xinhao Li', 'Ziang Yan', 'Yinan He', 'Jiashuo Yu', 'Xiangyu Zeng', 'Chenting Wang', 'Changlian Ma', 'Haian Huang', 'Jianfei Gao', 'Min Dou', 'Kai Chen', 'Wenhai Wang', 'Yu Qiao', 'Yali Wang', 'Limin Wang'] | ['cs.CV'] | This paper aims to improve the performance of video multimodal large language
models (MLLM) via long and rich context (LRC) modeling. As a result, we develop
a new version of InternVideo2.5 with a focus on enhancing the original MLLMs'
ability to perceive fine-grained details and capture long-form temporal
structure in... | 2025-01-21T18:59:00Z | technical report | null | null | InternVideo2.5: Empowering Video MLLMs with Long and Rich Context Modeling | ['Yi Wang', 'Xinhao Li', 'Ziang Yan', 'Yinan He', 'Jiashuo Yu', 'Xiangyun Zeng', 'Chenting Wang', 'Changlian Ma', 'Haian Huang', 'Jianfei Gao', 'Min Dou', 'Kaiming Chen', 'Wenhai Wang', 'Yu Qiao', 'Yali Wang', 'Limin Wang'] | 2025 | arXiv.org | 51 | 104 | ['Computer Science'] |
2501.12432 | Divide-Then-Aggregate: An Efficient Tool Learning Method via Parallel
Tool Invocation | ['Dongsheng Zhu', 'Weixian Shi', 'Zhengliang Shi', 'Zhaochun Ren', 'Shuaiqiang Wang', 'Lingyong Yan', 'Dawei Yin'] | ['cs.LG', 'cs.AI', 'cs.CL'] | Although current Large Language Models (LLMs) exhibit impressive
capabilities, performing complex real-world tasks still requires tool learning.
Mainstream methods, such as CoT/ReAct, rely on step-by-step tool invocation to
interact with external environments, but they are limited in perceptual scope
and lack adequate ... | 2025-01-21T16:49:08Z | Accepted to ACL 2025 | null | null | null | null | null | null | null | null | null |
2501.12486 | The Journey Matters: Average Parameter Count over Pre-training Unifies
Sparse and Dense Scaling Laws | ['Tian Jin', 'Ahmed Imtiaz Humayun', 'Utku Evci', 'Suvinay Subramanian', 'Amir Yazdanbakhsh', 'Dan Alistarh', 'Gintare Karolina Dziugaite'] | ['cs.LG', 'cs.CL'] | Pruning eliminates unnecessary parameters in neural networks; it offers a
promising solution to the growing computational demands of large language
models (LLMs). While many focus on post-training pruning, sparse
pre-training--which combines pruning and pre-training into a single
phase--provides a simpler alternative. ... | 2025-01-21T20:23:22Z | 17 pages | null | null | null | null | null | null | null | null | null |
2501.12766 | NExtLong: Toward Effective Long-Context Training without Long Documents | ['Chaochen Gao', 'Xing Wu', 'Zijia Lin', 'Debing Zhang', 'Songlin Hu'] | ['cs.CL', 'cs.AI'] | Large language models (LLMs) with extended context windows have made
significant strides yet remain a challenge due to the scarcity of long
documents. Existing methods tend to synthesize long-context data but lack a
clear mechanism to reinforce the long-range dependency modeling. To address
this limitation, we propose ... | 2025-01-22T10:01:54Z | Accepted by ICML 2025. Corresponding authors: xing wu, and songlin hu | null | null | NExtLong: Toward Effective Long-Context Training without Long Documents | ['Chaochen Gao', 'Xing Wu', 'Zijia Lin', 'Debing Zhang', 'Songlin Hu'] | 2025 | arXiv.org | 2 | 99 | ['Computer Science'] |
2501.12910 | PreciseCam: Precise Camera Control for Text-to-Image Generation | ['Edurne Bernal-Berdun', 'Ana Serrano', 'Belen Masia', 'Matheus Gadelha', 'Yannick Hold-Geoffroy', 'Xin Sun', 'Diego Gutierrez'] | ['cs.CV', 'cs.AI', 'cs.LG'] | Images as an artistic medium often rely on specific camera angles and lens
distortions to convey ideas or emotions; however, such precise control is
missing in current text-to-image models. We propose an efficient and general
solution that allows precise control over the camera when generating both
photographic and art... | 2025-01-22T14:37:01Z | null | null | null | PreciseCam: Precise Camera Control for Text-to-Image Generation | ['Edurne Bernal-Berdun', 'Ana Serrano', 'B. Masiá', 'Matheus Gadelha', 'Yannick Hold-Geoffroy', 'Xin Sun', 'Diego Gutierrez'] | 2025 | Computer Vision and Pattern Recognition | 1 | 52 | ['Computer Science'] |
2501.12948 | DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via
Reinforcement Learning | ['DeepSeek-AI', 'Daya Guo', 'Dejian Yang', 'Haowei Zhang', 'Junxiao Song', 'Ruoyu Zhang', 'Runxin Xu', 'Qihao Zhu', 'Shirong Ma', 'Peiyi Wang', 'Xiao Bi', 'Xiaokang Zhang', 'Xingkai Yu', 'Yu Wu', 'Z. F. Wu', 'Zhibin Gou', 'Zhihong Shao', 'Zhuoshu Li', 'Ziyi Gao', 'Aixin Liu', 'Bing Xue', 'Bingxuan Wang', 'Bochao Wu', '... | ['cs.CL', 'cs.AI', 'cs.LG'] | We introduce our first-generation reasoning models, DeepSeek-R1-Zero and
DeepSeek-R1. DeepSeek-R1-Zero, a model trained via large-scale reinforcement
learning (RL) without supervised fine-tuning (SFT) as a preliminary step,
demonstrates remarkable reasoning capabilities. Through RL, DeepSeek-R1-Zero
naturally emerges w... | 2025-01-22T15:19:35Z | null | null | null | DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning | ['DeepSeek-AI', 'Daya Guo', 'Dejian Yang', 'Haowei Zhang', 'Jun-Mei Song', 'Ruoyu Zhang', 'R. Xu', 'Qihao Zhu', 'Shirong Ma', 'Peiyi Wang', 'Xiaoling Bi', 'Xiaokang Zhang', 'Xingkai Yu', 'Yu Wu', 'Z. F. Wu', 'Zhibin Gou', 'Zhihong Shao', 'Zhuoshu Li', 'Ziyi Gao', 'A. Liu', 'Bing Xue', 'Bing-Li Wang', 'Bochao Wu', 'Bei ... | 2025 | arXiv.org | 2033 | 33 | ['Computer Science'] |
2501.12979 | FlanEC: Exploring Flan-T5 for Post-ASR Error Correction | ['Moreno La Quatra', 'Valerio Mario Salerno', 'Yu Tsao', 'Sabato Marco Siniscalchi'] | ['cs.CL', 'cs.AI', 'cs.SD', 'eess.AS'] | In this paper, we present an encoder-decoder model leveraging Flan-T5 for
post-Automatic Speech Recognition (ASR) Generative Speech Error Correction
(GenSEC), and we refer to it as FlanEC. We explore its application within the
GenSEC framework to enhance ASR outputs by mapping n-best hypotheses into a
single output sen... | 2025-01-22T16:06:04Z | Accepted at the 2024 IEEE Workshop on Spoken Language Technology
(SLT) - GenSEC Challenge | 2024 IEEE Spoken Language Technology Workshop (SLT), Macao, 2024,
pp. 608-615 | 10.1109/SLT61566.2024.10832257 | null | null | null | null | null | null | null |
2501.13007 | PairJudge RM: Perform Best-of-N Sampling with Knockout Tournament | ['Yantao Liu', 'Zijun Yao', 'Rui Min', 'Yixin Cao', 'Lei Hou', 'Juanzi Li'] | ['cs.CL'] | Best-of-N (BoN) sampling, a common strategy for test-time scaling of Large
Language Models (LLMs), relies on reward models to select the best candidate
solution from multiple generations. However, traditional reward models often
assign arbitrary and inconsistent scores, limiting their effectiveness. To
address this, we... | 2025-01-22T16:49:37Z | in progress work | null | null | null | null | null | null | null | null | null |
2501.13106 | VideoLLaMA 3: Frontier Multimodal Foundation Models for Image and Video
Understanding | ['Boqiang Zhang', 'Kehan Li', 'Zesen Cheng', 'Zhiqiang Hu', 'Yuqian Yuan', 'Guanzheng Chen', 'Sicong Leng', 'Yuming Jiang', 'Hang Zhang', 'Xin Li', 'Peng Jin', 'Wenqi Zhang', 'Fan Wang', 'Lidong Bing', 'Deli Zhao'] | ['cs.CV'] | In this paper, we propose VideoLLaMA3, a more advanced multimodal foundation
model for image and video understanding. The core design philosophy of
VideoLLaMA3 is vision-centric. The meaning of "vision-centric" is two-fold: the
vision-centric training paradigm and vision-centric framework design. The key
insight of our... | 2025-01-22T18:59:46Z | BZ, KL, ZC, ZH, YY, GC, SL, YJ, HZ, and XL contributed equally to
this project. Code: https://github.com/DAMO-NLP-SG/VideoLLaMA3 | null | null | null | null | null | null | null | null | null |
2501.13306 | OSUM: Advancing Open Speech Understanding Models with Limited Resources in Academia | ['Xuelong Geng', 'Kun Wei', 'Qijie Shao', 'Shuiyun Liu', 'Zhennan Lin', 'Zhixian Zhao', 'Guojian Li', 'Wenjie Tian', 'Peikun Chen', 'Yangze Li', 'Pengcheng Guo', 'Mingchen Shao', 'Shuiyuan Wang', 'Yuang Cao', 'Chengyou Wang', 'Tianyi Xu', 'Yuhang Dai', 'Xinfa Zhu', 'Yue Li', 'Li Zhang', 'Lei Xie'] | ['cs.SD', 'cs.CL', 'eess.AS'] | Large Language Models (LLMs) have made significant progress in various downstream tasks, inspiring the development of Speech Understanding Language Models (SULMs) to enable comprehensive speech-based interactions. However, most advanced SULMs are developed by the industry, leveraging large-scale datasets and computatio... | 2025-01-23T01:27:46Z | OSUM Technical Report v2. The experimental results reported herein differ from those in v1 because of adding new data and training in more steps | null | null | OSUM: Advancing Open Speech Understanding Models with Limited Resources in Academia | ['Xuelong Geng', 'Kun Wei', 'Qijie Shao', 'Shuiyun Liu', 'Zhennan Lin', 'Zhixian Zhao', 'Guojian Li', 'Wenjie Tian', 'Peikun Chen', 'Yangze Li', 'Pengcheng Guo', 'Mingchen Shao', 'Shuiyuan Wang', 'Yuang Cao', 'Chengyou Wang', 'Tianyi Xu', 'Yuhang Dai', 'Xinfa Zhu', 'Yue Li', 'Li Zhang', 'Lei Xie'] | 2025 | arXiv.org | 5 | 32 | ['Computer Science']
2501.13432 | Emotion estimation from video footage with LSTM | ['Samer Attrah'] | ['cs.CV', 'cs.LG', 'cs.RO', '68T45 (primary) 68T07, 68T40 (secondary)', 'I.4.8; J.4; I.2.9'] | Emotion estimation in general is a field that has been studied for a long time, and several approaches exist using machine learning. In this paper, we present an LSTM model that processes the blend-shapes produced by the library MediaPipe, for a face detected in a live stream of a camera, to estimate the main emotion ... | 2025-01-23T07:35:47Z | 12 pages, 5 figures, 34 references, 4 tables, 3 equations | null | null | null | null | null | null | null | null | null
2501.13452 | EchoVideo: Identity-Preserving Human Video Generation by Multimodal Feature Fusion | ['Jiangchuan Wei', 'Shiyue Yan', 'Wenfeng Lin', 'Boyuan Liu', 'Renjie Chen', 'Mingyu Guo'] | ['cs.CV'] | Recent advancements in video generation have significantly impacted various downstream applications, particularly in identity-preserving video generation (IPT2V). However, existing methods struggle with "copy-paste" artifacts and low similarity issues, primarily due to their reliance on low-level facial image informati... | 2025-01-23T08:06:11Z | null | null | null | null | null | null | null | null | null | null
2501.13492 | Quantized Spike-driven Transformer | ['Xuerui Qiu', 'Malu Zhang', 'Jieyuan Zhang', 'Wenjie Wei', 'Honglin Cao', 'Junsheng Guo', 'Rui-Jie Zhu', 'Yimeng Shan', 'Yang Yang', 'Haizhou Li'] | ['cs.CV'] | Spiking neural networks are emerging as a promising energy-efficient alternative to traditional artificial neural networks due to their spike-driven paradigm. However, recent research in the SNN domain has mainly focused on enhancing accuracy by designing large-scale Transformer structures, which typically rely on subs... | 2025-01-23T09:14:15Z | Accepted by ICLR 2025 | null | null | null | null | null | null | null | null | null
2501.13567 | K-COMP: Retrieval-Augmented Medical Domain Question Answering With Knowledge-Injected Compressor | ['Jeonghun Cho', 'Gary Geunbae Lee'] | ['cs.CL', 'cs.AI'] | Retrieval-augmented question answering (QA) integrates external information and thereby increases the QA accuracy of reader models that lack domain knowledge. However, documents retrieved for closed domains require high expertise, so the reader model may have difficulty fully comprehending the text. Moreover, the retri... | 2025-01-23T11:14:21Z | Accepted at NAACL 2025 (Main, long paper) | null | null | null | null | null | null | null | null | null
2501.13687 | Question Answering on Patient Medical Records with Private Fine-Tuned LLMs | ['Sara Kothari', 'Ayush Gupta'] | ['cs.CL', 'cs.AI'] | Healthcare systems continuously generate vast amounts of electronic health records (EHRs), commonly stored in the Fast Healthcare Interoperability Resources (FHIR) standard. Despite the wealth of information in these records, their complexity and volume make it difficult for users to retrieve and interpret crucial heal... | 2025-01-23T14:13:56Z | null | null | null | null | null | null | null | null | null | null
2501.13918 | Improving Video Generation with Human Feedback | ['Jie Liu', 'Gongye Liu', 'Jiajun Liang', 'Ziyang Yuan', 'Xiaokun Liu', 'Mingwu Zheng', 'Xiele Wu', 'Qiulin Wang', 'Wenyu Qin', 'Menghan Xia', 'Xintao Wang', 'Xiaohong Liu', 'Fei Yang', 'Pengfei Wan', 'Di Zhang', 'Kun Gai', 'Yujiu Yang', 'Wanli Ouyang'] | ['cs.CV', 'cs.AI', 'cs.GR', 'cs.LG'] | Video generation has achieved significant advances through rectified flow techniques, but issues like unsmooth motion and misalignment between videos and prompts persist. In this work, we develop a systematic pipeline that harnesses human feedback to mitigate these problems and refine the video generation model. Specif... | 2025-01-23T18:55:41Z | null | null | null | Improving Video Generation with Human Feedback | ['Jie Liu', 'Gongye Liu', 'Jiajun Liang', 'Ziyang Yuan', 'Xiaokun Liu', 'Mingwu Zheng', 'Xiele Wu', 'Qiulin Wang', 'Wenyu Qin', 'Menghan Xia', 'Xintao Wang', 'Xiaohong Liu', 'Fei Yang', 'Pengfei Wan', 'Di Zhang', 'Kun Gai', 'Yujiu Yang', 'Wanli Ouyang'] | 2025 | arXiv.org | 26 | 78 | ['Computer Science']
2501.13919 | Temporal Preference Optimization for Long-Form Video Understanding | ['Rui Li', 'Xiaohan Wang', 'Yuhui Zhang', 'Zeyu Wang', 'Serena Yeung-Levy'] | ['cs.CV', 'cs.AI', 'cs.CL', 'cs.LG', 'cs.RO'] | Despite significant advancements in video large multimodal models (video-LMMs), achieving effective temporal grounding in long-form videos remains a challenge for existing models. To address this limitation, we propose Temporal Preference Optimization (TPO), a novel post-training framework designed to enhance the tempo... | 2025-01-23T18:58:03Z | null | null | null | Temporal Preference Optimization for Long-Form Video Understanding | ['Rui Li', 'Xiaohan Wang', 'Yuhui Zhang', 'Zeyu Wang', 'S. Yeung-Levy'] | 2025 | arXiv.org | 15 | 69 | ['Computer Science']
2501.13921 | The Breeze 2 Herd of Models: Traditional Chinese LLMs Based on Llama with Vision-Aware and Function-Calling Capabilities | ['MediaTek Research', ':', 'Chan-Jan Hsu', 'Chia-Sheng Liu', 'Meng-Hsi Chen', 'Muxi Chen', 'Po-Chun Hsu', 'Yi-Chang Chen', 'Da-Shan Shiu'] | ['cs.CL'] | Llama-Breeze2 (hereinafter referred to as Breeze2) is a suite of advanced multi-modal language models, available in 3B and 8B parameter configurations, specifically designed to enhance Traditional Chinese language representation. Building upon the Llama 3.2 model family, we continue the pre-training of Breeze2 on an ex... | 2025-01-23T18:59:02Z | null | null | null | null | null | null | null | null | null | null
2501.13925 | GeoPixel: Pixel Grounding Large Multimodal Model in Remote Sensing | ['Akashah Shabbir', 'Mohammed Zumri', 'Mohammed Bennamoun', 'Fahad S. Khan', 'Salman Khan'] | ['cs.CV'] | Recent advances in large multimodal models (LMMs) have recognized fine-grained grounding as an imperative factor of visual understanding and dialogue. However, the benefits of such representation in LMMs are limited to the natural image domain, and these models perform poorly for remote sensing (RS). The distinct overh... | 2025-01-23T18:59:30Z | null | null | null | GeoPixel: Pixel Grounding Large Multimodal Model in Remote Sensing | ['Akashah Shabbir', 'Mohammed Zumri', 'Mohammed Bennamoun', 'F. Khan', 'Salman Khan'] | 2025 | arXiv.org | 9 | 0 | ['Computer Science']
2501.13928 | Fast3R: Towards 3D Reconstruction of 1000+ Images in One Forward Pass | ['Jianing Yang', 'Alexander Sax', 'Kevin J. Liang', 'Mikael Henaff', 'Hao Tang', 'Ang Cao', 'Joyce Chai', 'Franziska Meier', 'Matt Feiszli'] | ['cs.CV', 'cs.AI', 'cs.GR', 'cs.RO'] | Multi-view 3D reconstruction remains a core challenge in computer vision, particularly in applications requiring accurate and scalable representations across diverse perspectives. Current leading methods such as DUSt3R employ a fundamentally pairwise approach, processing images in pairs and necessitating costly global ... | 2025-01-23T18:59:55Z | CVPR 2025. Project website: https://fast3r-3d.github.io/ | null | null | Fast3R: Towards 3D Reconstruction of 1000+ Images in One Forward Pass | ['Jianing Yang', 'Alexander Sax', 'Kevin J. Liang', 'Mikael Henaff', 'Hao Tang', 'Ang Cao', 'Joyce Chai', 'Franziska Meier', 'Matt Feiszli'] | 2025 | arXiv.org | 31 | 70 | ['Computer Science']
2501.13944 | Fanar: An Arabic-Centric Multimodal Generative AI Platform | ['Fanar Team', 'Ummar Abbas', 'Mohammad Shahmeer Ahmad', 'Firoj Alam', 'Enes Altinisik', 'Ehsannedin Asgari', 'Yazan Boshmaf', 'Sabri Boughorbel', 'Sanjay Chawla', 'Shammur Chowdhury', 'Fahim Dalvi', 'Kareem Darwish', 'Nadir Durrani', 'Mohamed Elfeky', 'Ahmed Elmagarmid', 'Mohamed Eltabakh', 'Masoomali Fatehkia', 'Anas... | ['cs.CL', 'cs.AI', 'I.2.0; D.2.0'] | We present Fanar, a platform for Arabic-centric multimodal generative AI systems, that supports language, speech and image generation tasks. At the heart of Fanar are Fanar Star and Fanar Prime, two highly capable Arabic Large Language Models (LLMs) that are best in the class on well established benchmarks for similar ... | 2025-01-18T05:35:32Z | null | null | null | Fanar: An Arabic-Centric Multimodal Generative AI Platform | ['Fanar Team Ummar Abbas', 'M. S. Ahmad', 'Firoj Alam', 'Enes Altinisik', 'Ehsannedin Asgari', 'Yazan Boshmaf', 'Sabri Boughorbel', 'Sanjay Chawla', 'Shammur A. Chowdhury', 'Fahim Dalvi', 'Kareem Darwish', 'Nadir Durrani', 'M. Elfeky', 'A. Elmagarmid', 'M. Eltabakh', 'Masoomali Fatehkia', 'Anastasios Fragkopoulos', 'Ma... | 2025 | arXiv.org | 17 | 0 | ['Computer Science']
2501.13959 | Learning an Effective Premise Retrieval Model for Efficient Mathematical Formalization | ['Yicheng Tao', 'Haotian Liu', 'Shanwen Wang', 'Hongteng Xu'] | ['cs.CL', 'cs.AI', 'cs.IR'] | Formalized mathematics has recently garnered significant attention for its ability to assist mathematicians across various fields. Premise retrieval, as a common step in mathematical formalization, has been a challenge, particularly for inexperienced users. Existing retrieval methods that facilitate natural language qu... | 2025-01-21T06:32:25Z | null | null | null | null | null | null | null | null | null | null
2501.14208 | You Only Teach Once: Learn One-Shot Bimanual Robotic Manipulation from Video Demonstrations | ['Huayi Zhou', 'Ruixiang Wang', 'Yunxin Tai', 'Yueci Deng', 'Guiliang Liu', 'Kui Jia'] | ['cs.RO', 'cs.CV'] | Bimanual robotic manipulation is a long-standing challenge of embodied intelligence due to its characteristics of dual-arm spatial-temporal coordination and high-dimensional action spaces. Previous studies rely on pre-defined action taxonomies or direct teleoperation to alleviate or circumvent these issues, often makin... | 2025-01-24T03:26:41Z | accepted by RSS 2025 | null | null | null | null | null | null | null | null | null
2501.14342 | Chain-of-Retrieval Augmented Generation | ['Liang Wang', 'Haonan Chen', 'Nan Yang', 'Xiaolong Huang', 'Zhicheng Dou', 'Furu Wei'] | ['cs.IR', 'cs.CL'] | This paper introduces an approach for training o1-like RAG models that retrieve and reason over relevant information step by step before generating the final answer. Conventional RAG methods usually perform a single retrieval step before the generation process, which limits their effectiveness in addressing complex que... | 2025-01-24T09:12:52Z | 18 pages | null | null | null | null | null | null | null | null | null
2501.14350 | FireRedASR: Open-Source Industrial-Grade Mandarin Speech Recognition Models from Encoder-Decoder to LLM Integration | ['Kai-Tuo Xu', 'Feng-Long Xie', 'Xu Tang', 'Yao Hu'] | ['eess.AS', 'cs.SD'] | We present FireRedASR, a family of large-scale automatic speech recognition (ASR) models for Mandarin, designed to meet diverse requirements in superior performance and optimal efficiency across various applications. FireRedASR comprises two variants: FireRedASR-LLM: Designed to achieve state-of-the-art (SOTA) perfor... | 2025-01-24T09:21:41Z | null | null | null | FireRedASR: Open-Source Industrial-Grade Mandarin Speech Recognition Models from Encoder-Decoder to LLM Integration | ['Kai-Tuo Xu', 'Feng-Long Xie', 'Xu Tang', 'Yao Hu'] | 2025 | arXiv.org | 5 | 38 | ['Engineering', 'Computer Science']
2501.14431 | Domaino1s: Guiding LLM Reasoning for Explainable Answers in High-Stakes Domains | ['Xu Chu', 'Zhijie Tan', 'Hanlin Xue', 'Guanyu Wang', 'Tong Mo', 'Weiping Li'] | ['cs.CL', 'cs.LG'] | Large Language Models (LLMs) are widely applied to downstream domains. However, current LLMs for high-stakes domain tasks, such as financial investment and legal QA, typically generate brief answers without reasoning processes and explanations. This limits users' confidence in making decisions based on their responses.... | 2025-01-24T11:57:39Z | null | null | null | Domaino1s: Guiding LLM Reasoning for Explainable Answers in High-Stakes Domains | ['Xu Chu', 'Zhijie Tan', 'Hanlin Xue', 'Guanyu Wang', 'Tong Mo', 'Weiping Li'] | 2025 | arXiv.org | 3 | 72 | ['Computer Science']
2501.14607 | ReferDINO: Referring Video Object Segmentation with Visual Grounding Foundations | ['Tianming Liang', 'Kun-Yu Lin', 'Chaolei Tan', 'Jianguo Zhang', 'Wei-Shi Zheng', 'Jian-Fang Hu'] | ['cs.CV'] | Referring video object segmentation (RVOS) aims to segment target objects throughout a video based on a text description. This is challenging as it involves deep vision-language understanding, pixel-level dense prediction and spatiotemporal reasoning. Despite notable progress in recent years, existing methods still exh... | 2025-01-24T16:24:15Z | Accepted to ICCV 2025. Project page: \url{https://isee-laboratory.github.io/ReferDINO} | null | null | null | null | null | null | null | null | null