arxiv_id | title | authors | categories | summary | published | comments | journal_ref | doi | ss_title | ss_authors | ss_year | ss_venue | ss_citationCount | ss_referenceCount | ss_fieldsOfStudy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2411.08034 | Scaling Properties of Diffusion Models for Perceptual Tasks | ['Rahul Ravishankar', 'Zeeshan Patel', 'Jathushan Rajasegaran', 'Jitendra Malik'] | ['cs.CV', 'cs.AI'] | In this paper, we argue that iterative computation with diffusion models
offers a powerful paradigm for not only generation but also visual perception
tasks. We unify tasks such as depth estimation, optical flow, and amodal
segmentation under the framework of image-to-image translation, and show how
diffusion models be... | 2024-11-12T18:59:35Z | null | null | null | null | null | null | null | null | null | null |
2411.08127 | TIPO: Text to Image with Text Presampling for Prompt Optimization | ['Shih-Ying Yeh', 'Sang-Hyun Park', 'Yi Li', 'Giyeong Oh', 'Xuehai Wang', 'Min Song', 'Youngjae Yu'] | ['cs.CV'] | TIPO (Text-to-Image Prompt Optimization) introduces an efficient approach for
automatic prompt refinement in text-to-image (T2I) generation. Starting from
simple user prompts, TIPO leverages a lightweight pre-trained model to expand
these prompts into richer, detailed versions. Conceptually, TIPO samples
refined prompt... | 2024-11-12T19:09:45Z | 41 pages, 32 figures | null | null | null | null | null | null | null | null | null |
2411.08343 | Brain Treebank: Large-scale intracranial recordings from naturalistic
language stimuli | ['Christopher Wang', 'Adam Uri Yaari', 'Aaditya K Singh', 'Vighnesh Subramaniam', 'Dana Rosenfarb', 'Jan DeWitt', 'Pranav Misra', 'Joseph R. Madsen', 'Scellig Stone', 'Gabriel Kreiman', 'Boris Katz', 'Ignacio Cases', 'Andrei Barbu'] | ['q-bio.NC'] | We present the Brain Treebank, a large-scale dataset of electrophysiological
neural responses, recorded from intracranial probes while 10 subjects watched
one or more Hollywood movies. Subjects watched on average 2.6 Hollywood movies,
for an average viewing time of 4.3 hours, and a total of 43 hours. The audio
track fo... | 2024-11-13T05:22:09Z | 36 pages, 17 figures; Accepted at NeurIPS Dataset and Benchmarks 2024 | null | null | null | null | null | null | null | null | null |
2411.08842 | AstroM$^3$: A self-supervised multimodal model for astronomy | ['Mariia Rizhko', 'Joshua S. Bloom'] | ['astro-ph.IM', 'cs.AI'] | While machine-learned models are now routinely employed to facilitate
astronomical inquiry, model inputs tend to be limited to a primary data source
(namely images or time series) and, in the more advanced approaches, some
metadata. Yet with the growing use of wide-field, multiplexed observational
resources, individual... | 2024-11-13T18:20:29Z | null | null | null | AstroM$^3$: A self-supervised multimodal model for astronomy | ['Mariia Rizhko', 'Joshua S. Bloom'] | 2024 | null | 1 | 0 | ['Physics', 'Computer Science'] |
2411.08868 | CamemBERT 2.0: A Smarter French Language Model Aged to Perfection | ['Wissam Antoun', 'Francis Kulumba', 'Rian Touchent', 'Éric de la Clergerie', 'Benoît Sagot', 'Djamé Seddah'] | ['cs.CL'] | French language models, such as CamemBERT, have been widely adopted across
industries for natural language processing (NLP) tasks, with models like
CamemBERT seeing over 4 million downloads per month. However, these models face
challenges due to temporal concept drift, where outdated training data leads to
a decline in... | 2024-11-13T18:49:35Z | null | null | null | null | null | null | null | null | null | null |
2411.08872 | Large Wireless Model (LWM): A Foundation Model for Wireless Channels | ['Sadjad Alikhani', 'Gouranga Charan', 'Ahmed Alkhateeb'] | ['cs.IT', 'eess.SP', 'math.IT'] | This paper presents Large Wireless Model (LWM) -- the world's first
foundation model for wireless channels. Designed as a task-agnostic model, LWM
generates universal, rich, contextualized channel embeddings (features) that
potentially enhance performance across a wide range of downstream tasks in
wireless communicatio... | 2024-11-13T18:51:10Z | The LWM model and relevant scripts are available on the LWM website:
https://lwm-wireless.net/ | null | null | Large Wireless Model (LWM): A Foundation Model for Wireless Channels | ['Sadjad Alikhani', 'Gouranga Charan', 'Ahmed Alkhateeb'] | 2024 | arXiv.org | 16 | 30 | ['Computer Science', 'Engineering', 'Mathematics'] |
2411.09009 | Cut Your Losses in Large-Vocabulary Language Models | ['Erik Wijmans', 'Brody Huval', 'Alexander Hertzberg', 'Vladlen Koltun', 'Philipp Krähenbühl'] | ['cs.LG', 'cs.CL'] | As language models grow ever larger, so do their vocabularies. This has
shifted the memory footprint of LLMs during training disproportionately to one
single layer: the cross-entropy in the loss computation. Cross-entropy builds
up a logit matrix with entries for each pair of input tokens and vocabulary
items and, for ... | 2024-11-13T20:30:15Z | To appear in ICLR 2025 (Oral). Code is available at
https://github.com/apple/ml-cross-entropy | null | null | Cut Your Losses in Large-Vocabulary Language Models | ['Erik Wijmans', 'Brody Huval', 'Alexander Hertzberg', 'V. Koltun', 'Philipp Krähenbühl'] | 2024 | International Conference on Learning Representations | 5 | 39 | ['Computer Science'] |
2411.09012 | AstroMLab 3: Achieving GPT-4o Level Performance in Astronomy with a
Specialized 8B-Parameter Large Language Model | ['Tijmen de Haan', 'Yuan-Sen Ting', 'Tirthankar Ghosal', 'Tuan Dung Nguyen', 'Alberto Accomazzi', 'Azton Wells', 'Nesar Ramachandra', 'Rui Pan', 'Zechang Sun'] | ['astro-ph.IM'] | AstroSage-Llama-3.1-8B is a domain-specialized natural-language AI assistant
tailored for research in astronomy, astrophysics, cosmology, and astronomical
instrumentation. Trained on the complete collection of astronomy-related arXiv
papers from 2007 to 2024 along with millions of synthetically-generated
question-answe... | 2024-11-13T20:36:02Z | null | Sci Rep 15, 13751 (2025) | 10.1038/s41598-025-97131-y | null | null | null | null | null | null | null |
2411.09209 | JoyVASA: Portrait and Animal Image Animation with Diffusion-Based
Audio-Driven Facial Dynamics and Head Motion Generation | ['Xuyang Cao', 'Guoxin Wang', 'Sheng Shi', 'Jun Zhao', 'Yang Yao', 'Jintao Fei', 'Minyu Gao'] | ['cs.CV'] | Audio-driven portrait animation has made significant advances with
diffusion-based models, improving video quality and lipsync accuracy. However,
the increasing complexity of these models has led to inefficiencies in training
and inference, as well as constraints on video length and inter-frame
continuity. In this pape... | 2024-11-14T06:13:05Z | null | null | null | JoyVASA: Portrait and Animal Image Animation with Diffusion-Based Audio-Driven Facial Dynamics and Head Motion Generation | ['Xuyang Cao', 'Guoxin Wang', 'Sheng Shi', 'Jun Zhao', 'Yang Yao', 'Jintao Fei', 'Minyu Gao'] | 2024 | arXiv.org | 1 | 45 | ['Computer Science'] |
2411.09420 | SAG-ViT: A Scale-Aware, High-Fidelity Patching Approach with Graph
Attention for Vision Transformers | ['Shravan Venkatraman', 'Jaskaran Singh Walia', 'Joe Dhanith P R'] | ['cs.CV', 'cs.AI', 'cs.LG', '68T07', 'I.2.10'] | Vision Transformers (ViTs) have redefined image classification by leveraging
self-attention to capture complex patterns and long-range dependencies between
image patches. However, a key challenge for ViTs is efficiently incorporating
multi-scale feature representations, which is inherent in convolutional neural
network... | 2024-11-14T13:15:27Z | 14 pages, 8 figures, 9 tables | null | null | null | null | null | null | null | null | null |
2411.09502 | Golden Noise for Diffusion Models: A Learning Framework | ['Zikai Zhou', 'Shitong Shao', 'Lichen Bai', 'Zhiqiang Xu', 'Bo Han', 'Zeke Xie'] | ['cs.LG', 'cs.CV'] | Text-to-image diffusion model is a popular paradigm that synthesizes
personalized images by providing a text prompt and a random Gaussian noise.
While people observe that some noises are ``golden noises'' that can achieve
better text-image alignment and higher human preference than others, we still
lack a machine learn... | 2024-11-14T15:13:13Z | null | null | null | Golden Noise for Diffusion Models: A Learning Framework | ['Zikai Zhou', 'Shitong Shao', 'Lichen Bai', 'Zhiqiang Xu', 'Bo Han', 'Zeke Xie'] | 2024 | arXiv.org | 19 | 39 | ['Computer Science'] |
2411.09595 | LLaMA-Mesh: Unifying 3D Mesh Generation with Language Models | ['Zhengyi Wang', 'Jonathan Lorraine', 'Yikai Wang', 'Hang Su', 'Jun Zhu', 'Sanja Fidler', 'Xiaohui Zeng'] | ['cs.LG', 'cs.AI', 'cs.CL', 'cs.CV', '68T05', 'I.3.5; I.2.10; I.2.6'] | This work explores expanding the capabilities of large language models (LLMs)
pretrained on text to generate 3D meshes within a unified model. This offers
key advantages of (1) leveraging spatial knowledge already embedded in LLMs,
derived from textual sources like 3D tutorials, and (2) enabling conversational
3D gener... | 2024-11-14T17:08:23Z | See the project website at
https://research.nvidia.com/labs/toronto-ai/LLaMA-Mesh/ | null | null | null | null | null | null | null | null | null |
2411.09703 | MagicQuill: An Intelligent Interactive Image Editing System | ['Zichen Liu', 'Yue Yu', 'Hao Ouyang', 'Qiuyu Wang', 'Ka Leong Cheng', 'Wen Wang', 'Zhiheng Liu', 'Qifeng Chen', 'Yujun Shen'] | ['cs.CV'] | Image editing involves a variety of complex tasks and requires efficient and
precise manipulation techniques. In this paper, we present MagicQuill, an
integrated image editing system that enables swift actualization of creative
ideas. Our system features a streamlined yet functionally robust interface,
allowing for the... | 2024-11-14T18:59:57Z | Accepted to CVPR 2025. Code and demo available at
https://magic-quill.github.io | null | null | null | null | null | null | null | null | null |
2411.09943 | Zero-shot Voice Conversion with Diffusion Transformers | ['Songting Liu'] | ['cs.SD', 'cs.LG', 'eess.AS'] | Zero-shot voice conversion aims to transform a source speech utterance to
match the timbre of a reference speech from an unseen speaker. Traditional
approaches struggle with timbre leakage, insufficient timbre representation,
and mismatches between training and inference tasks. We propose Seed-VC, a
novel framework tha... | 2024-11-15T04:43:44Z | null | null | null | null | null | null | null | null | null | null |
2411.10027 | XLSR-Mamba: A Dual-Column Bidirectional State Space Model for Spoofing
Attack Detection | ['Yang Xiao', 'Rohan Kumar Das'] | ['eess.AS', 'cs.SD'] | Transformers and their variants have achieved great success in speech
processing. However, their multi-head self-attention mechanism is
computationally expensive. Therefore, one novel selective state space model,
Mamba, has been proposed as an alternative. Building on its success in
automatic speech recognition, we app... | 2024-11-15T08:13:51Z | Accepted by IEEE Signal Processing Letters | null | null | null | null | null | null | null | null | null |
2411.10061 | EchoMimicV2: Towards Striking, Simplified, and Semi-Body Human Animation | ['Rang Meng', 'Xingyu Zhang', 'Yuming Li', 'Chenguang Ma'] | ['cs.GR', 'cs.CV'] | Recent work on human animation usually involves audio, pose, or movement-map
conditions, thereby achieving vivid animation quality. However, these methods
often face practical challenges due to extra control conditions, cumbersome
condition injection modules, or limitation to head region driving. Hence, we
ask if it is... | 2024-11-15T09:23:18Z | CVPR2025 | null | null | null | null | null | null | null | null | null |
2411.10083 | Xmodel-1.5: An 1B-scale Multilingual LLM | ['Wang Qun', 'Liu Yang', 'Lin Qingquan', 'Jiang Ling'] | ['cs.CL'] | We introduce Xmodel-1.5, a 1-billion-parameter multilingual large language
model pretrained on 2 trillion tokens, designed for balanced performance and
scalability. Unlike most large models that use the BPE tokenizer, Xmodel-1.5
employs a custom unigram tokenizer with 65,280 tokens, optimizing both
efficiency and accur... | 2024-11-15T10:01:52Z | null | null | null | null | null | null | null | null | null | null |
2411.10161 | SEAGULL: No-reference Image Quality Assessment for Regions of Interest
via Vision-Language Instruction Tuning | ['Zewen Chen', 'Juan Wang', 'Wen Wang', 'Sunhan Xu', 'Hang Xiong', 'Yun Zeng', 'Jian Guo', 'Shuxun Wang', 'Chunfeng Yuan', 'Bing Li', 'Weiming Hu'] | ['cs.CV'] | Existing Image Quality Assessment (IQA) methods achieve remarkable success in
analyzing quality for the overall image, but few works explore quality analysis for
Regions of Interest (ROIs). The quality analysis of ROIs can provide
fine-grained guidance for image quality improvement and is crucial for
scenarios focusing on ... | 2024-11-15T13:07:22Z | null | null | null | null | null | null | null | null | null | null |
2411.10414 | Llama Guard 3 Vision: Safeguarding Human-AI Image Understanding
Conversations | ['Jianfeng Chi', 'Ujjwal Karn', 'Hongyuan Zhan', 'Eric Smith', 'Javier Rando', 'Yiming Zhang', 'Kate Plawiak', 'Zacharie Delpierre Coudert', 'Kartikeya Upasani', 'Mahesh Pasupuleti'] | ['cs.CV', 'cs.CL'] | We introduce Llama Guard 3 Vision, a multimodal LLM-based safeguard for
human-AI conversations that involves image understanding: it can be used to
safeguard content for both multimodal LLM inputs (prompt classification) and
outputs (response classification). Unlike the previous text-only Llama Guard
versions (Inan et ... | 2024-11-15T18:34:07Z | null | null | null | null | null | null | null | null | null | null |
2411.10433 | M-VAR: Decoupled Scale-wise Autoregressive Modeling for High-Quality
Image Generation | ['Sucheng Ren', 'Yaodong Yu', 'Nataniel Ruiz', 'Feng Wang', 'Alan Yuille', 'Cihang Xie'] | ['cs.CV'] | There exists recent work in computer vision, named VAR, that proposes a new
autoregressive paradigm for image generation. Diverging from the vanilla
next-token prediction, VAR structurally reformulates the image generation into
a coarse to fine next-scale prediction. In this paper, we show that this
scale-wise autoregr... | 2024-11-15T18:54:42Z | null | null | null | null | null | null | null | null | null | null |
2411.10438 | MARS: Unleashing the Power of Variance Reduction for Training Large
Models | ['Huizhuo Yuan', 'Yifeng Liu', 'Shuang Wu', 'Xun Zhou', 'Quanquan Gu'] | ['cs.LG', 'math.OC', 'stat.ML'] | Training deep neural networks--and more recently, large models--demands
efficient and scalable optimizers. Adaptive gradient algorithms like Adam,
AdamW, and their variants have been central to this task. Despite the
development of numerous variance reduction algorithms in the past decade aimed
at accelerating stochasti... | 2024-11-15T18:57:39Z | 35 pages, 19 figures, 12 tables | null | null | null | null | null | null | null | null | null |
2411.10440 | LLaVA-CoT: Let Vision Language Models Reason Step-by-Step | ['Guowei Xu', 'Peng Jin', 'Ziang Wu', 'Hao Li', 'Yibing Song', 'Lichao Sun', 'Li Yuan'] | ['cs.CV'] | Large language models have demonstrated substantial advancements in reasoning
capabilities. However, current Vision-Language Models (VLMs) often struggle to
perform systematic and structured reasoning, especially when handling complex
visual question-answering tasks. In this work, we introduce LLaVA-CoT, a large
VLM de... | 2024-11-15T18:58:31Z | 17 pages, ICCV 2025 | null | null | null | null | null | null | null | null | null |
2411.10442 | Enhancing the Reasoning Ability of Multimodal Large Language Models via
Mixed Preference Optimization | ['Weiyun Wang', 'Zhe Chen', 'Wenhai Wang', 'Yue Cao', 'Yangzhou Liu', 'Zhangwei Gao', 'Jinguo Zhu', 'Xizhou Zhu', 'Lewei Lu', 'Yu Qiao', 'Jifeng Dai'] | ['cs.CL', 'cs.CV'] | Existing open-source multimodal large language models (MLLMs) generally
follow a training process involving pre-training and supervised fine-tuning.
However, these models suffer from distribution shifts, which limit their
multimodal reasoning, particularly in the Chain-of-Thought (CoT) performance.
To address this, we ... | 2024-11-15T18:59:27Z | null | null | null | null | null | null | null | null | null | null |
2411.10499 | FitDiT: Advancing the Authentic Garment Details for High-fidelity
Virtual Try-on | ['Boyuan Jiang', 'Xiaobin Hu', 'Donghao Luo', 'Qingdong He', 'Chengming Xu', 'Jinlong Peng', 'Jiangning Zhang', 'Chengjie Wang', 'Yunsheng Wu', 'Yanwei Fu'] | ['cs.CV'] | Although image-based virtual try-on has made considerable progress, emerging
approaches still encounter challenges in producing high-fidelity and robust
fitting images across diverse scenarios. These methods often struggle with
issues such as texture-aware maintenance and size-aware fitting, which hinder
their overall ... | 2024-11-15T11:02:23Z | Project page: https://byjiang.com/FitDiT/ | null | null | null | null | null | null | null | null | null |
2411.10501 | OnlyFlow: Optical Flow based Motion Conditioning for Video Diffusion
Models | ['Mathis Koroglu', 'Hugo Caselles-Dupré', 'Guillaume Jeanneret Sanmiguel', 'Matthieu Cord'] | ['cs.CV', 'cs.LG'] | We consider the problem of text-to-video generation tasks with precise
control for various applications such as camera movement control and
video-to-video editing. Most methods tackling this problem rely on providing
user-defined controls, such as binary masks or camera movement embeddings. In
our approach we propose On... | 2024-11-15T11:19:25Z | null | null | null | null | null | null | null | null | null | null |
2411.10818 | FlipSketch: Flipping Static Drawings to Text-Guided Sketch Animations | ['Hmrishav Bandyopadhyay', 'Yi-Zhe Song'] | ['cs.GR', 'cs.CV'] | Sketch animations offer a powerful medium for visual storytelling, from
simple flip-book doodles to professional studio productions. While traditional
animation requires teams of skilled artists to draw key frames and in-between
frames, existing automation attempts still demand significant artistic effort
through preci... | 2024-11-16T14:53:03Z | Code: https://github.com/hmrishavbandy/FlipSketch | null | null | FlipSketch: Flipping Static Drawings to Text-Guided Sketch Animations | ['Hmrishav Bandyopadhyay', 'Yi-Zhe Song'] | 2024 | arXiv.org | 3 | 68 | ['Computer Science'] |
2411.10958 | SageAttention2: Efficient Attention with Thorough Outlier Smoothing and
Per-thread INT4 Quantization | ['Jintao Zhang', 'Haofeng Huang', 'Pengle Zhang', 'Jia Wei', 'Jun Zhu', 'Jianfei Chen'] | ['cs.LG', 'cs.AI', 'cs.CV', 'cs.NE', 'cs.PF'] | Although quantization for linear layers has been widely used, its application
to accelerate the attention process remains limited. To further enhance the
efficiency of attention computation compared to SageAttention while maintaining
precision, we propose SageAttention2, which utilizes significantly faster 4-bit
matrix... | 2024-11-17T04:35:49Z | @inproceedings{zhang2024sageattention2, title={Sageattention2:
Efficient attention with thorough outlier smoothing and per-thread int4
quantization}, author={Zhang, Jintao and Huang, Haofeng and Zhang, Pengle and
Wei, Jia and Zhu, Jun and Chen, Jianfei}, booktitle={International Conference
on Machine Learning (... | Proceedings of the 42nd International Conference on Machine
Learning, PMLR 267, 2025 (ICML 2025) | null | SageAttention2: Efficient Attention with Thorough Outlier Smoothing and Per-thread INT4 Quantization | ['Jintao Zhang', 'Haofeng Huang', 'Pengle Zhang', 'Jia Wei', 'Jun Zhu', 'Jianfei Chen'] | 2024 | null | 28 | 71 | ['Computer Science'] |
2411.11027 | BianCang: A Traditional Chinese Medicine Large Language Model | ['Sibo Wei', 'Xueping Peng', 'Yi-fei Wang', 'Jiasheng Si', 'Weiyu Zhang', 'Wenpeng Lu', 'Xiaoming Wu', 'Yinglong Wang'] | ['cs.CL', 'cs.AI'] | The rise of large language models (LLMs) has driven significant progress in
medical applications, including traditional Chinese medicine (TCM). However,
current medical LLMs struggle with TCM diagnosis and syndrome differentiation
due to substantial differences between TCM and modern medical theory, and the
scarcity of... | 2024-11-17T10:17:01Z | null | null | null | BianCang: A Traditional Chinese Medicine Large Language Model | ['Sibo Wei', 'Xueping Peng', 'Yi-fei Wang', 'Jiasheng Si', 'Weiyu Zhang', 'Wenpeng Lu', 'Xiaoming Wu', 'Yinglong Wang'] | 2024 | arXiv.org | 4 | 31 | ['Computer Science'] |
2411.11045 | StableV2V: Stablizing Shape Consistency in Video-to-Video Editing | ['Chang Liu', 'Rui Li', 'Kaidong Zhang', 'Yunwei Lan', 'Dong Liu'] | ['cs.CV'] | Recent advancements of generative AI have significantly promoted content
creation and editing, where prevailing studies further extend this exciting
progress to video editing. In doing so, these studies mainly transfer the
inherent motion patterns from the source videos to the edited ones, where
results with inferior c... | 2024-11-17T11:48:01Z | Project page: https://alonzoleeeooo.github.io/StableV2V, code:
https://github.com/AlonzoLeeeooo/StableV2V, model weights:
https://huggingface.co/AlonzoLeeeooo/StableV2V, dataset (DAVIS-Edit):
https://huggingface.co/datasets/AlonzoLeeeooo/DAVIS-Edit | null | null | null | null | null | null | null | null | null |
2411.11055 | FastDraft: How to Train Your Draft | ['Ofir Zafrir', 'Igor Margulis', 'Dorin Shteyman', 'Shira Guskin', 'Guy Boudoukh'] | ['cs.CL'] | Speculative Decoding has gained popularity as an effective technique for
accelerating the auto-regressive inference process of Large Language Models.
However, Speculative Decoding entirely relies on the availability of efficient
draft models, which are often lacking for many existing language models due to
a stringent ... | 2024-11-17T12:32:44Z | Accepted at ACL 2025 | null | null | null | null | null | null | null | null | null |
2411.11098 | MolParser: End-to-end Visual Recognition of Molecule Structures in the
Wild | ['Xi Fang', 'Jiankun Wang', 'Xiaochen Cai', 'Shangqian Chen', 'Shuwen Yang', 'Haoyi Tao', 'Nan Wang', 'Lin Yao', 'Linfeng Zhang', 'Guolin Ke'] | ['cs.CV'] | In recent decades, chemistry publications and patents have increased rapidly.
A significant portion of key information is embedded in molecular structure
figures, complicating large-scale literature searches and limiting the
application of large language models in fields such as biology, chemistry, and
pharmaceuticals.... | 2024-11-17T15:00:09Z | null | null | null | MolParser: End-to-end Visual Recognition of Molecule Structures in the Wild | ['Xi Fang', 'Jiankun Wang', 'Xiaochen Cai', 'Shangqian Chen', 'Shuwen Yang', 'Lin Yao', 'Linfeng Zhang', 'Guolin Ke'] | 2024 | arXiv.org | 2 | 65 | ['Computer Science'] |
2411.11171 | LLäMmlein: Transparent, Compact and Competitive German-Only Language
Models from Scratch | ['Jan Pfister', 'Julia Wunderle', 'Andreas Hotho'] | ['cs.CL', 'cs.AI', 'cs.LG'] | We create two German-only decoder models, LLäMmlein 120M and 1B,
transparently from scratch and publish them, along with the training data, for
the German NLP research community to use. The model training involved several
key steps, including extensive data preprocessing, the creation of a custom
German tokenizer, th... | 2024-11-17T20:44:34Z | camera ready @ACL25;
https://www.informatik.uni-wuerzburg.de/datascience/projects/nlp/llammlein/ | null | null | LLäMmlein: Transparent, Compact and Competitive German-Only Language Models from Scratch | ['Jan Pfister', 'Julia Wunderle', 'Andreas Hotho', 'Abhimanyu Dubey', 'Abhinav Jauhri', 'Abhinav Pandey', 'Abhishek Kadian', 'Ahmad Al-Dahle', 'Aiesha Letman', 'Akhil Mathur', 'Alan Schelten', 'Amy Yang', 'Angela Fan', 'Anirudh Goyal', 'A. Hartshorn', 'Aobo Yang', 'Archi Mitra', 'A. Sravankumar', 'A. Korenev', 'Arthur ... | 2024 | null | 0 | 44 | ['Computer Science'] |
2411.11222 | The Sound of Water: Inferring Physical Properties from Pouring Liquids | ['Piyush Bagad', 'Makarand Tapaswi', 'Cees G. M. Snoek', 'Andrew Zisserman'] | ['cs.CV', 'cs.MM', 'cs.SD', 'eess.AS'] | We study the connection between audio-visual observations and the underlying
physics of a mundane yet intriguing everyday activity: pouring liquids. Given
only the sound of liquid pouring into a container, our objective is to
automatically infer physical properties such as the liquid level, the shape and
size of the co... | 2024-11-18T01:19:37Z | Project page at https://bpiyush.github.io/pouring-water-website.
Short version accepted to ICASSP 2025 | null | null | null | null | null | null | null | null | null |
2411.11231 | BeautyBank: Encoding Facial Makeup in Latent Space | ['Qianwen Lu', 'Xingchao Yang', 'Takafumi Taketomi'] | ['cs.CV'] | The advancement of makeup transfer, editing, and image encoding has
demonstrated their effectiveness and superior quality. However, existing makeup
works primarily focus on low-dimensional features such as color distributions
and patterns, limiting their versatility across a wide range of makeup
applications. Furthermo... | 2024-11-18T01:52:31Z | null | null | null | null | null | null | null | null | null | null |
2411.11694 | Enhancing LLM Reasoning with Reward-guided Tree Search | ['Jinhao Jiang', 'Zhipeng Chen', 'Yingqian Min', 'Jie Chen', 'Xiaoxue Cheng', 'Jiapeng Wang', 'Yiru Tang', 'Haoxiang Sun', 'Jia Deng', 'Wayne Xin Zhao', 'Zheng Liu', 'Dong Yan', 'Jian Xie', 'Zhongyuan Wang', 'Ji-Rong Wen'] | ['cs.CL', 'cs.AI'] | Recently, test-time scaling has garnered significant attention from the
research community, largely due to the substantial advancements of the o1 model
released by OpenAI. By allocating more computational resources during the
inference phase, large language models~(LLMs) can extensively explore the
solution space by ge... | 2024-11-18T16:15:17Z | Technical Report on Slow Thinking with LLMs: I | null | null | null | null | null | null | null | null | null |
2411.11736 | Advacheck at GenAI Detection Task 1: AI Detection Powered by
Domain-Aware Multi-Tasking | ['German Gritsai', 'Anastasia Voznyuk', 'Ildar Khabutdinov', 'Andrey Grabovoy'] | ['cs.CL'] | The paper describes a system designed by Advacheck team to recognise
machine-generated and human-written texts in the monolingual subtask of GenAI
Detection Task 1 competition. Our developed system is a multi-task architecture
with shared Transformer Encoder between several classification heads. One head
is responsible... | 2024-11-18T17:03:30Z | null | null | null | null | null | null | null | null | null | null |
2411.11758 | The Power of Many: Multi-Agent Multimodal Models for Cultural Image
Captioning | ['Longju Bai', 'Angana Borah', 'Oana Ignat', 'Rada Mihalcea'] | ['cs.CV', 'cs.AI', 'cs.CL'] | Large Multimodal Models (LMMs) exhibit impressive performance across various
multimodal tasks. However, their effectiveness in cross-cultural contexts
remains limited due to the predominantly Western-centric nature of most data
and models. Conversely, multi-agent models have shown significant capability in
solving comp... | 2024-11-18T17:37:10Z | null | null | null | null | null | null | null | null | null | null |
2411.11770 | CNMBERT: A Model for Converting Hanyu Pinyin Abbreviations to Chinese
Characters | ['Zishuo Feng', 'Feng Cao'] | ['cs.CL', 'cs.AI'] | The task of converting Hanyu Pinyin abbreviations to Chinese characters is a
significant branch within the domain of Chinese Spelling Correction (CSC). It
plays an important role in many downstream applications such as named entity
recognition and sentiment analysis. This task typically involves text-length
alignment a... | 2024-11-18T17:50:34Z | 8 pages, 5 figures, 8 tables | null | null | null | null | null | null | null | null | null |
2411.11916 | From Words to Structured Visuals: A Benchmark and Framework for
Text-to-Diagram Generation and Editing | ['Jingxuan Wei', 'Cheng Tan', 'Qi Chen', 'Gaowei Wu', 'Siyuan Li', 'Zhangyang Gao', 'Linzhuang Sun', 'Bihui Yu', 'Ruifeng Guo'] | ['cs.DB'] | We introduce the task of text-to-diagram generation, which focuses on
creating structured visual representations directly from textual descriptions.
Existing approaches in text-to-image and text-to-code generation lack the
logical organization and flexibility needed to produce accurate, editable
diagrams, often resulti... | 2024-11-18T02:58:37Z | null | null | null | null | null | null | null | null | null | null |
2411.11927 | FLAME: Frozen Large Language Models Enable Data-Efficient Language-Image
Pre-training | ['Anjia Cao', 'Xing Wei', 'Zhiheng Ma'] | ['cs.CV'] | Language-image pre-training faces significant challenges due to limited data
in specific formats and the constrained capacities of text encoders. While
prevailing methods attempt to address these issues through data augmentation
and architecture modifications, they continue to struggle with processing
long-form text in... | 2024-11-18T09:19:30Z | null | null | null | null | null | null | null | null | null | null |
2411.11930 | AtomThink: A Slow Thinking Framework for Multimodal Mathematical
Reasoning | ['Kun Xiang', 'Zhili Liu', 'Zihao Jiang', 'Yunshuang Nie', 'Runhui Huang', 'Haoxiang Fan', 'Hanhui Li', 'Weiran Huang', 'Yihan Zeng', 'Jianhua Han', 'Lanqing Hong', 'Hang Xu', 'Xiaodan Liang'] | ['cs.CV', 'cs.AI'] | In this paper, we address the challenging task of multimodal mathematical
reasoning by incorporating the ability of ``slow thinking" into multimodal
large language models (MLLMs). Contrary to existing methods that rely on direct
or fast thinking, our key idea is to construct long chains of thought (CoT)
consisting of a... | 2024-11-18T11:54:58Z | null | null | null | null | null | null | null | null | null | null |
2411.12644 | CodeXEmbed: A Generalist Embedding Model Family for Multiligual and
Multi-task Code Retrieval | ['Ye Liu', 'Rui Meng', 'Shafiq Joty', 'Silvio Savarese', 'Caiming Xiong', 'Yingbo Zhou', 'Semih Yavuz'] | ['cs.SE', 'cs.AI'] | Despite the success of text retrieval in many NLP tasks, code retrieval
remains a largely underexplored area. Most text retrieval systems are tailored
for natural language queries, often neglecting the specific challenges of
retrieving code. This gap leaves existing models unable to effectively capture
the diversity of... | 2024-11-19T16:54:45Z | null | null | null | CodeXEmbed: A Generalist Embedding Model Family for Multiligual and Multi-task Code Retrieval | ['Ye Liu', 'Rui Meng', 'Shafiq Joty', 'Silvio Savarese', 'Caiming Xiong', 'Yingbo Zhou', 'Semih Yavuz'] | 2024 | arXiv.org | 8 | 63 | ['Computer Science'] |
2411.12811 | Stylecodes: Encoding Stylistic Information For Image Generation | ['Ciara Rowles'] | ['cs.CV'] | Diffusion models excel in image generation, but controlling them remains a
challenge. We focus on the problem of style-conditioned image generation.
Although example images work, they are cumbersome: srefs (style-reference
codes) from MidJourney solve this issue by expressing a specific image style in
a short numeric c... | 2024-11-19T19:04:31Z | code: https://github.com/CiaraStrawberry/stylecodes project page:
https://ciarastrawberry.github.io/stylecodes.github.io/. arXiv admin note:
substantial text overlap with arXiv:2408.03209 | null | null | Stylecodes: Encoding Stylistic Information For Image Generation | ['Ciara Rowles'] | 2024 | arXiv.org | 0 | 0 | ['Computer Science'] |
2411.12925 | Loss-to-Loss Prediction: Scaling Laws for All Datasets | ['David Brandfonbrener', 'Nikhil Anand', 'Nikhil Vyas', 'Eran Malach', 'Sham Kakade'] | ['cs.LG', 'cs.AI', 'cs.CL', 'stat.ML'] | While scaling laws provide a reliable methodology for predicting train loss
across compute scales for a single data distribution, less is known about how
these predictions should change as we change the distribution. In this paper,
we derive a strategy for predicting one loss from another and apply it to
predict across... | 2024-11-19T23:23:16Z | null | null | null | null | null | null | null | null | null | null |
2,411.12946 | A Flexible Large Language Models Guardrail Development Methodology
Applied to Off-Topic Prompt Detection | ['Gabriel Chua', 'Shing Yee Chan', 'Shaun Khoo'] | ['cs.CL', 'cs.LG', '68T50', 'I.2.7'] | Large Language Models (LLMs) are prone to off-topic misuse, where users may
prompt these models to perform tasks beyond their intended scope. Current
guardrails, which often rely on curated examples or custom classifiers, suffer
from high false-positive rates, limited adaptability, and the impracticality of
requiring r... | 2024-11-20T00:31:23Z | 8 pages, 5 figures | null | null | null | null | null | null | null | null | null |
2,411.12951 | On the Consistency of Video Large Language Models in Temporal
Comprehension | ['Minjoon Jung', 'Junbin Xiao', 'Byoung-Tak Zhang', 'Angela Yao'] | ['cs.CV'] | Video large language models (Video-LLMs) can temporally ground language
queries and retrieve video moments. Yet, such temporal comprehension
capabilities are neither well-studied nor understood. So we conduct a study on
prediction consistency -- a key indicator for robustness and trustworthiness of
temporal grounding. ... | 2024-11-20T00:47:17Z | Accepted to CVPR'25 | null | null | null | null | null | null | null | null | null |
2,411.13127 | Adapting Vision Foundation Models for Robust Cloud Segmentation in
Remote Sensing Images | ['Xuechao Zou', 'Shun Zhang', 'Kai Li', 'Shiying Wang', 'Junliang Xing', 'Lei Jin', 'Congyan Lang', 'Pin Tao'] | ['cs.CV'] | Cloud segmentation is a critical challenge in remote sensing image
interpretation, as its accuracy directly impacts the effectiveness of
subsequent data processing and analysis. Recently, vision foundation models
(VFM) have demonstrated powerful generalization capabilities across various
visual tasks. In this paper, we... | 2024-11-20T08:37:39Z | 13 pages, 9 figures | null | null | Adapting Vision Foundation Models for Robust Cloud Segmentation in Remote Sensing Images | ['Xuechao Zou', 'Shun Zhang', 'Kai Li', 'Shiying Wang', 'Jun Xing', 'Lei Jin', 'Congyan Lang', 'Pin Tao'] | 2,024 | arXiv.org | 1 | 64 | ['Computer Science'] |
2,411.1328 | Empower Structure-Based Molecule Optimization with Gradient Guided
Bayesian Flow Networks | ['Keyue Qiu', 'Yuxuan Song', 'Jie Yu', 'Hongbo Ma', 'Ziyao Cao', 'Zhilong Zhang', 'Yushuai Wu', 'Mingyue Zheng', 'Hao Zhou', 'Wei-Ying Ma'] | ['q-bio.BM', 'cs.AI'] | Structure-Based molecule optimization (SBMO) aims to optimize molecules with
both continuous coordinates and discrete types against protein targets. A
promising direction is to exert gradient guidance on generative models given
its remarkable success in images, but it is challenging to guide discrete data
and risks inc... | 2024-11-20T12:48:29Z | Accepted to ICML 2025 | null | null | null | null | null | null | null | null | null |
2,411.13383 | Adversarial Diffusion Compression for Real-World Image Super-Resolution | ['Bin Chen', 'Gehui Li', 'Rongyuan Wu', 'Xindong Zhang', 'Jie Chen', 'Jian Zhang', 'Lei Zhang'] | ['eess.IV', 'cs.CV'] | Real-world image super-resolution (Real-ISR) aims to reconstruct
high-resolution images from low-resolution inputs degraded by complex, unknown
processes. While many Stable Diffusion (SD)-based Real-ISR methods have
achieved remarkable success, their slow, multi-step inference hinders practical
deployment. Recent SD-ba... | 2024-11-20T15:13:36Z | Accepted by CVPR 2025 | null | null | null | null | null | null | null | null | null |
2,411.13476 | When Precision Meets Position: BFloat16 Breaks Down RoPE in Long-Context
Training | ['Haonan Wang', 'Qian Liu', 'Chao Du', 'Tongyao Zhu', 'Cunxiao Du', 'Kenji Kawaguchi', 'Tianyu Pang'] | ['cs.CL'] | Extending context window sizes allows large language models (LLMs) to process
longer sequences and handle more complex tasks. Rotary Positional Embedding
(RoPE) has become the de facto standard due to its relative positional encoding
properties that benefit long-context training. However, we observe that using
RoPE wit... | 2024-11-20T17:22:31Z | null | null | null | null | null | null | null | null | null | null |
2,411.13503 | VBench++: Comprehensive and Versatile Benchmark Suite for Video
Generative Models | ['Ziqi Huang', 'Fan Zhang', 'Xiaojie Xu', 'Yinan He', 'Jiashuo Yu', 'Ziyue Dong', 'Qianli Ma', 'Nattapol Chanpaisit', 'Chenyang Si', 'Yuming Jiang', 'Yaohui Wang', 'Xinyuan Chen', 'Ying-Cong Chen', 'Limin Wang', 'Dahua Lin', 'Yu Qiao', 'Ziwei Liu'] | ['cs.CV'] | Video generation has witnessed significant advancements, yet evaluating these
models remains a challenge. A comprehensive evaluation benchmark for video
generation is indispensable for two reasons: 1) Existing metrics do not fully
align with human perceptions; 2) An ideal evaluation system should provide
insights to in... | 2024-11-20T17:54:41Z | Leaderboard:
https://huggingface.co/spaces/Vchitect/VBench_Leaderboard Code:
https://github.com/Vchitect/VBench Project page:
https://vchitect.github.io/VBench-project/ extension of arXiv:2311.17982.
arXiv admin note: substantial text overlap with arXiv:2311.17982 | null | null | null | null | null | null | null | null | null |
2,411.1355 | Find Any Part in 3D | ['Ziqi Ma', 'Yisong Yue', 'Georgia Gkioxari'] | ['cs.CV'] | Why don't we have foundation models in 3D yet? A key limitation is data
scarcity. For 3D object part segmentation, existing datasets are small in size
and lack diversity. We show that it is possible to break this data barrier by
building a data engine powered by 2D foundation models. Our data engine
automatically annot... | 2024-11-20T18:59:01Z | Project website: https://ziqi-ma.github.io/find3dsite/ | null | null | Find Any Part in 3D | ['Ziqi Ma', 'Yisong Yue', 'Georgia Gkioxari'] | 2,024 | arXiv.org | 5 | 41 | ['Computer Science'] |
2,411.13552 | REDUCIO! Generating 1024$\times$1024 Video within 16 Seconds using
Extremely Compressed Motion Latents | ['Rui Tian', 'Qi Dai', 'Jianmin Bao', 'Kai Qiu', 'Yifan Yang', 'Chong Luo', 'Zuxuan Wu', 'Yu-Gang Jiang'] | ['cs.CV'] | Commercial video generation models have exhibited realistic, high-fidelity
results but are still restricted to limited access. One crucial obstacle for
large-scale applications is the expensive training and inference cost. In this
paper, we argue that videos contain much more redundant information than
images, thus can... | 2024-11-20T18:59:52Z | Code available at https://github.com/microsoft/Reducio-VAE | null | null | null | null | null | null | null | null | null |
2,411.13623 | Unsupervised Foundation Model-Agnostic Slide-Level Representation
Learning | ['Tim Lenz', 'Peter Neidlinger', 'Marta Ligero', 'Georg Wölflein', 'Marko van Treeck', 'Jakob Nikolas Kather'] | ['cs.CV'] | Representation learning of pathology whole-slide images (WSIs) has primarily
relied on weak supervision with Multiple Instance Learning (MIL). This approach
leads to slide representations highly tailored to a specific clinical task.
Self-supervised learning (SSL) has been successfully applied to train
histopathology fo... | 2024-11-20T13:12:43Z | Got accepted at CVPR 2025 | null | null | Unsupervised Foundation Model-Agnostic Slide-Level Representation Learning | ['Tim Lenz', 'P. Neidlinger', 'Marta Ligero', 'Georg Wölflein', 'M. Treeck', 'J. Kather'] | 2,024 | arXiv.org | 3 | 48 | ['Computer Science'] |
2,411.13632 | ID-Patch: Robust ID Association for Group Photo Personalization | ['Yimeng Zhang', 'Tiancheng Zhi', 'Jing Liu', 'Shen Sang', 'Liming Jiang', 'Qing Yan', 'Sijia Liu', 'Linjie Luo'] | ['cs.CV'] | The ability to synthesize personalized group photos and specify the positions
of each identity offers immense creative potential. While such imagery can be
visually appealing, it presents significant challenges for existing
technologies. A persistent issue is identity (ID) leakage, where injected
facial features interf... | 2024-11-20T18:55:28Z | Accepted by CVPR 2025. Project Page is:
https://byteaigc.github.io/ID-Patch/ | null | null | ID-Patch: Robust ID Association for Group Photo Personalization | ['Yimeng Zhang', 'Tiancheng Zhi', 'Jing Liu', 'Shen Sang', 'Liming Jiang', 'Qing Yan', 'Sijia Liu', 'Linjie Luo'] | 2,024 | arXiv.org | 4 | 0 | ['Computer Science'] |
2,411.13676 | Hymba: A Hybrid-head Architecture for Small Language Models | ['Xin Dong', 'Yonggan Fu', 'Shizhe Diao', 'Wonmin Byeon', 'Zijia Chen', 'Ameya Sunil Mahabaleshwarkar', 'Shih-Yang Liu', 'Matthijs Van Keirsbilck', 'Min-Hung Chen', 'Yoshi Suhara', 'Yingyan Lin', 'Jan Kautz', 'Pavlo Molchanov'] | ['cs.CL', 'cs.AI', 'cs.LG'] | We propose Hymba, a family of small language models featuring a hybrid-head
parallel architecture that integrates transformer attention mechanisms with
state space models (SSMs) for enhanced efficiency. Attention heads provide
high-resolution recall, while SSM heads enable efficient context summarization.
Additionally,... | 2024-11-20T19:51:25Z | 20 pages, models are available on huggingface | null | null | Hymba: A Hybrid-head Architecture for Small Language Models | ['Xin Dong', 'Y. Fu', 'Shizhe Diao', 'Wonmin Byeon', 'Zijia Chen', 'A. Mahabaleshwarkar', 'Shih-Yang Liu', 'Matthijs Van Keirsbilck', 'Min-Hung Chen', 'Yoshi Suhara', 'Y. Lin', 'Jan Kautz', 'Pavlo Molchanov'] | 2,024 | arXiv.org | 27 | 80 | ['Computer Science'] |
2,411.13807 | MagicDrive-V2: High-Resolution Long Video Generation for Autonomous
Driving with Adaptive Control | ['Ruiyuan Gao', 'Kai Chen', 'Bo Xiao', 'Lanqing Hong', 'Zhenguo Li', 'Qiang Xu'] | ['cs.CV'] | The rapid advancement of diffusion models has greatly improved video
synthesis, especially in controllable video generation, which is vital for
applications like autonomous driving. Although DiT with 3D VAE has become a
standard framework for video generation, it introduces challenges in
controllable driving video gene... | 2024-11-21T03:13:30Z | Project Website: https://flymin.github.io/magicdrive-v2/ | null | null | MagicDrive-V2: High-Resolution Long Video Generation for Autonomous Driving with Adaptive Control | ['Ruiyuan Gao', 'Kai Chen', 'Bo Xiao', 'Lanqing Hong', 'Zhenguo Li', 'Qiang Xu'] | 2,024 | null | 12 | 45 | ['Computer Science'] |
2,411.14073 | Meaning at the Planck scale? Contextualized word embeddings for doing
history, philosophy, and sociology of science | ['Arno Simons'] | ['cs.CL', 'physics.hist-ph', 'I.2.6; I.2.7; J.4'] | This paper explores the potential of contextualized word embeddings (CWEs) as
a new tool in the history, philosophy, and sociology of science (HPSS) for
studying contextual and evolving meanings of scientific concepts. Using the
term "Planck" as a test case, I evaluate five BERT-based models with varying
degrees of dom... | 2024-11-21T12:38:23Z | 18 pages, 7 figures (1 in the Supplement) | null | null | null | null | null | null | null | null | null |
2,411.14125 | RestorerID: Towards Tuning-Free Face Restoration with ID Preservation | ['Jiacheng Ying', 'Mushui Liu', 'Zhe Wu', 'Runming Zhang', 'Zhu Yu', 'Siming Fu', 'Si-Yuan Cao', 'Chao Wu', 'Yunlong Yu', 'Hui-Liang Shen'] | ['cs.CV'] | Blind face restoration has made great progress in producing high-quality and
lifelike images. Yet it remains challenging to preserve the ID information
especially when the degradation is heavy. Current reference-guided face
restoration approaches either require face alignment or personalized
test-tuning, which are unfa... | 2024-11-21T13:50:25Z | 10 pages, 10 figures | null | null | RestorerID: Towards Tuning-Free Face Restoration with ID Preservation | ['Jiacheng Ying', 'Mushui Liu', 'Zhe Wu', 'Runmin Zhang', 'Zhu Yu', 'Siming Fu', 'Sixi Cao', 'Chao Wu', 'Yunlong Yu', 'Hui Shen'] | 2,024 | arXiv.org | 2 | 0 | ['Computer Science'] |
2,411.14251 | Natural Language Reinforcement Learning | ['Xidong Feng', 'Bo Liu', 'Yan Song', 'Haotian Fu', 'Ziyu Wan', 'Girish A. Koushik', 'Zhiyuan Hu', 'Mengyue Yang', 'Ying Wen', 'Jun Wang'] | ['cs.LG', 'cs.AI', 'cs.CL'] | Artificial intelligence progresses towards the "Era of Experience," where
agents are expected to learn from continuous, grounded interaction. We argue
that traditional Reinforcement Learning (RL), which typically represents value
as a scalar, can restrict an agent's deep understanding of environments and
hinder the activ... | 2024-11-21T15:57:02Z | 10 pages | null | null | null | null | null | null | null | null | null |
2,411.14393 | POS-tagging to highlight the skeletal structure of sentences | ['Grigorii Churakov'] | ['cs.CL'] | This study presents the development of a part-of-speech (POS) tagging model
to extract the skeletal structure of sentences using transfer learning with the
BERT architecture for token classification. The model, fine-tuned on Russian
text, demonstrates its effectiveness. The approach offers potential
applications in en... | 2024-11-21T18:25:19Z | in Russian language. Conference: Automated control systems and
information technologies https://asuit.pstu.ru/ Section: IT and automated
systems | In: Proc. All-Russian Sci.-Techn. Conf. "Automated Control Systems
and Information Technologies", Perm, Russia, Jun 7-9, 2024, vol. 1, pp. 67-72 | null | null | null | null | null | null | null | null |
2,411.14402 | Multimodal Autoregressive Pre-training of Large Vision Encoders | ['Enrico Fini', 'Mustafa Shukor', 'Xiujun Li', 'Philipp Dufter', 'Michal Klein', 'David Haldimann', 'Sai Aitharaju', 'Victor Guilherme Turrisi da Costa', 'Louis Béthune', 'Zhe Gan', 'Alexander T Toshev', 'Marcin Eichner', 'Moin Nabi', 'Yinfei Yang', 'Joshua M. Susskind', 'Alaaeldin El-Nouby'] | ['cs.CV', 'cs.LG'] | We introduce a novel method for pre-training of large-scale vision encoders.
Building on recent advancements in autoregressive pre-training of vision
models, we extend this framework to a multimodal setting, i.e., images and
text. In this paper, we present AIMV2, a family of generalist vision encoders
characterized by ... | 2024-11-21T18:31:25Z | https://github.com/apple/ml-aim | null | null | null | null | null | null | null | null | null |
2,411.14405 | Marco-o1: Towards Open Reasoning Models for Open-Ended Solutions | ['Yu Zhao', 'Huifeng Yin', 'Bo Zeng', 'Hao Wang', 'Tianqi Shi', 'Chenyang Lyu', 'Longyue Wang', 'Weihua Luo', 'Kaifu Zhang'] | ['cs.CL'] | Currently OpenAI o1 sparks a surge of interest in the study of large
reasoning models (LRM). Building on this momentum, Marco-o1 not only focuses on
disciplines with standard answers, such as mathematics, physics, and coding --
which are well-suited for reinforcement learning (RL) -- but also places
greater emphasis on... | 2024-11-21T18:37:33Z | null | null | null | null | null | null | null | null | null | null |
2,411.14432 | Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large
Language Models | ['Yuhao Dong', 'Zuyan Liu', 'Hai-Long Sun', 'Jingkang Yang', 'Winston Hu', 'Yongming Rao', 'Ziwei Liu'] | ['cs.CV'] | Large Language Models (LLMs) demonstrate enhanced capabilities and
reliability by reasoning more, evolving from Chain-of-Thought prompting to
product-level solutions like OpenAI o1. Despite various efforts to improve LLM
reasoning, high-quality long-chain reasoning data and optimized training
pipelines still remain ina... | 2024-11-21T18:59:55Z | null | null | null | Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models | ['Yuhao Dong', 'Zuyan Liu', 'Hai-Long Sun', 'Jingkang Yang', 'Winston Hu', 'Yongming Rao', 'Ziwei Liu'] | 2,024 | Computer Vision and Pattern Recognition | 45 | 53 | ['Computer Science'] |
2,411.1474 | TEXGen: a Generative Diffusion Model for Mesh Textures | ['Xin Yu', 'Ze Yuan', 'Yuan-Chen Guo', 'Ying-Tian Liu', 'JianHui Liu', 'Yangguang Li', 'Yan-Pei Cao', 'Ding Liang', 'Xiaojuan Qi'] | ['cs.CV', 'cs.AI', 'cs.GR'] | While high-quality texture maps are essential for realistic 3D asset
rendering, few studies have explored learning directly in the texture space,
especially on large-scale datasets. In this work, we depart from the
conventional approach of relying on pre-trained 2D diffusion models for
test-time optimization of 3D text... | 2024-11-22T05:22:11Z | Accepted to SIGGRAPH Asia Journal Article (TOG 2024) | ACM Transactions on Graphics (TOG) 2024, Volume 43, Issue 6,
Article No.: 213, Pages 1-14 | 10.1145/3687909 | null | null | null | null | null | null | null |
2,411.14869 | BIP3D: Bridging 2D Images and 3D Perception for Embodied Intelligence | ['Xuewu Lin', 'Tianwei Lin', 'Lichao Huang', 'Hongyu Xie', 'Zhizhong Su'] | ['cs.CV', 'cs.AI', 'cs.LG'] | In embodied intelligence systems, a key component is the 3D perception algorithm,
which enables agents to understand their surrounding environments. Previous
algorithms primarily rely on point clouds, which, despite offering precise
geometric information, still constrain perception performance due to inherent
sparsity, nois... | 2024-11-22T11:35:42Z | null | null | null | BIP3D: Bridging 2D Images and 3D Perception for Embodied Intelligence | ['Xuewu Lin', 'Tianwei Lin', 'Lichao Huang', 'Hongyu Xie', 'Zhizhong Su'] | 2,024 | Computer Vision and Pattern Recognition | 2 | 48 | ['Computer Science'] |
2,411.14877 | Astro-HEP-BERT: A bidirectional language model for studying the meanings
of concepts in astrophysics and high energy physics | ['Arno Simons'] | ['cs.CL', 'physics.hist-ph', 'I.2.6; I.2.7; J.4'] | I present Astro-HEP-BERT, a transformer-based language model specifically
designed for generating contextualized word embeddings (CWEs) to study the
meanings of concepts in astrophysics and high-energy physics. Built on a
general pretrained BERT model, Astro-HEP-BERT underwent further training over
three epochs using t... | 2024-11-22T11:59:15Z | 7 pages, 4 figures, 1 table | null | null | null | null | null | null | null | null | null |
2,411.15098 | OminiControl: Minimal and Universal Control for Diffusion Transformer | ['Zhenxiong Tan', 'Songhua Liu', 'Xingyi Yang', 'Qiaochu Xue', 'Xinchao Wang'] | ['cs.CV', 'cs.AI', 'cs.LG'] | We present OminiControl, a novel approach that rethinks how image conditions
are integrated into Diffusion Transformer (DiT) architectures. Current image
conditioning methods either introduce substantial parameter overhead or handle
only specific control tasks effectively, limiting their practical versatility.
OminiCon... | 2024-11-22T17:55:15Z | Accepted to ICCV 2025 | null | null | OminiControl: Minimal and Universal Control for Diffusion Transformer | ['Zhenxiong Tan', 'Songhua Liu', 'Xingyi Yang', 'Qiaochu Xue', 'Xinchao Wang'] | 2,024 | arXiv.org | 68 | 60 | ['Computer Science'] |
2,411.15114 | RE-Bench: Evaluating frontier AI R&D capabilities of language model
agents against human experts | ['Hjalmar Wijk', 'Tao Lin', 'Joel Becker', 'Sami Jawhar', 'Neev Parikh', 'Thomas Broadley', 'Lawrence Chan', 'Michael Chen', 'Josh Clymer', 'Jai Dhyani', 'Elena Ericheva', 'Katharyn Garcia', 'Brian Goodrich', 'Nikola Jurkovic', 'Holden Karnofsky', 'Megan Kinniment', 'Aron Lajko', 'Seraphina Nix', 'Lucas Sato', 'William... | ['cs.LG', 'cs.AI'] | Frontier AI safety policies highlight automation of AI research and
development (R&D) by AI agents as an important capability to anticipate.
However, there exist few evaluations for AI R&D capabilities, and none that are
highly realistic and have a direct comparison to human performance. We
introduce RE-Bench (Research... | 2024-11-22T18:30:46Z | null | null | null | null | null | null | null | null | null | null |
2,411.15124 | Tulu 3: Pushing Frontiers in Open Language Model Post-Training | ['Nathan Lambert', 'Jacob Morrison', 'Valentina Pyatkin', 'Shengyi Huang', 'Hamish Ivison', 'Faeze Brahman', 'Lester James V. Miranda', 'Alisa Liu', 'Nouha Dziri', 'Shane Lyu', 'Yuling Gu', 'Saumya Malik', 'Victoria Graf', 'Jena D. Hwang', 'Jiangjiang Yang', 'Ronan Le Bras', 'Oyvind Tafjord', 'Chris Wilhelm', 'Luca Sol... | ['cs.CL'] | Language model post-training is applied to refine behaviors and unlock new
skills across a wide range of recent language models, but open recipes for
applying these techniques lag behind proprietary ones. The underlying training
data and recipes for post-training are simultaneously the most important pieces
of the puzz... | 2024-11-22T18:44:04Z | Added Tulu 3 405B results and additional analyses | null | null | null | null | null | null | null | null | null |
2,411.15139 | DiffusionDrive: Truncated Diffusion Model for End-to-End Autonomous
Driving | ['Bencheng Liao', 'Shaoyu Chen', 'Haoran Yin', 'Bo Jiang', 'Cheng Wang', 'Sixu Yan', 'Xinbang Zhang', 'Xiangyu Li', 'Ying Zhang', 'Qian Zhang', 'Xinggang Wang'] | ['cs.CV', 'cs.RO'] | Recently, the diffusion model has emerged as a powerful generative technique
for robotic policy learning, capable of modeling multi-mode action
distributions. Leveraging its capability for end-to-end autonomous driving is a
promising direction. However, the numerous denoising steps in the robotic
diffusion policy and t... | 2024-11-22T18:59:47Z | Accepted to CVPR 2025 as Highlight. Code & demo & model are available
at https://github.com/hustvl/DiffusionDrive | null | null | DiffusionDrive: Truncated Diffusion Model for End-to-End Autonomous Driving | ['Bencheng Liao', 'Shaoyu Chen', 'Haoran Yin', 'Bo Jiang', 'Cheng Wang', 'Sixu Yan', 'Xinbang Zhang', 'Xiangyu Li', 'Ying Zhang', 'Qian Zhang', 'Xinggang Wang'] | 2,024 | arXiv.org | 46 | 62 | ['Computer Science'] |
2,411.15232 | BiomedCoOp: Learning to Prompt for Biomedical Vision-Language Models | ['Taha Koleilat', 'Hojat Asgariandehkordi', 'Hassan Rivaz', 'Yiming Xiao'] | ['cs.CV', 'cs.CL'] | Recent advancements in vision-language models (VLMs), such as CLIP, have
demonstrated substantial success in self-supervised representation learning for
vision tasks. However, effectively adapting VLMs to downstream applications
remains challenging, as their accuracy often depends on time-intensive and
expertise-demand... | 2024-11-21T19:13:04Z | Accepted to CVPR 2025 | null | null | null | null | null | null | null | null | null |
2,411.15241 | EfficientViM: Efficient Vision Mamba with Hidden State Mixer based State
Space Duality | ['Sanghyeok Lee', 'Joonmyung Choi', 'Hyunwoo J. Kim'] | ['cs.CV'] | For the deployment of neural networks in resource-constrained environments,
prior works have built lightweight architectures with convolution and attention
for capturing local and global dependencies, respectively. Recently, the state
space model (SSM) has emerged as an effective operation for global interaction
with i... | 2024-11-22T02:02:06Z | Conference on Computer Vision and Pattern Recognition (CVPR), 2025 | null | null | EfficientViM: Efficient Vision Mamba with Hidden State Mixer based State Space Duality | ['Sanghyeok Lee', 'Joonmyung Choi', 'Hyunwoo J. Kim'] | 2,024 | arXiv.org | 3 | 91 | ['Computer Science'] |
2,411.15269 | MambaIRv2: Attentive State Space Restoration | ['Hang Guo', 'Yong Guo', 'Yaohua Zha', 'Yulun Zhang', 'Wenbo Li', 'Tao Dai', 'Shu-Tao Xia', 'Yawei Li'] | ['eess.IV', 'cs.CV', 'cs.LG'] | The Mamba-based image restoration backbones have recently demonstrated
significant potential in balancing global reception and computational
efficiency. However, the inherent causal modeling limitation of Mamba, where
each token depends solely on its predecessors in the scanned sequence,
restricts the full utilization ... | 2024-11-22T12:45:12Z | Accepted by CVPR2025 | null | null | null | null | null | null | null | null | null |
2,411.15296 | MME-Survey: A Comprehensive Survey on Evaluation of Multimodal LLMs | ['Chaoyou Fu', 'Yi-Fan Zhang', 'Shukang Yin', 'Bo Li', 'Xinyu Fang', 'Sirui Zhao', 'Haodong Duan', 'Xing Sun', 'Ziwei Liu', 'Liang Wang', 'Caifeng Shan', 'Ran He'] | ['cs.CV', 'cs.AI', 'cs.CL'] | As a prominent direction of Artificial General Intelligence (AGI), Multimodal
Large Language Models (MLLMs) have garnered increased attention from both
industry and academia. Building upon pre-trained LLMs, this family of models
further develops multimodal perception and reasoning capabilities that are
impressive, such... | 2024-11-22T18:59:54Z | Produced by MME+MMBench+LLaVA Teams. Project Page:
https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models/tree/Benchmarks | null | null | null | null | null | null | null | null | null |
2,411.15397 | Efficient Online Inference of Vision Transformers by Training-Free
Tokenization | ['Leonidas Gee', 'Wing Yan Li', 'Viktoriia Sharmanska', 'Novi Quadrianto'] | ['cs.CV'] | The cost of deploying vision transformers increasingly represents a barrier
to wider industrial adoption. Existing compression techniques require
additional end-to-end fine-tuning or incur a significant drawback to runtime,
making them ill-suited for online (real-time) inference, where a prediction is
made on any new i... | 2024-11-23T00:47:13Z | null | null | null | null | null | null | null | null | null | null |
2,411.15411 | FINECAPTION: Compositional Image Captioning Focusing on Wherever You
Want at Any Granularity | ['Hang Hua', 'Qing Liu', 'Lingzhi Zhang', 'Jing Shi', 'Zhifei Zhang', 'Yilin Wang', 'Jianming Zhang', 'Jiebo Luo'] | ['cs.CV'] | The advent of large Vision-Language Models (VLMs) has significantly advanced
multimodal tasks, enabling more sophisticated and accurate reasoning across
various applications, including image and video captioning, visual question
answering, and cross-modal retrieval. Despite their superior capabilities, VLMs
struggle wi... | 2024-11-23T02:20:32Z | Preprint | null | null | null | null | null | null | null | null | null |
2,411.15497 | AeroGen: Enhancing Remote Sensing Object Detection with Diffusion-Driven
Data Generation | ['Datao Tang', 'Xiangyong Cao', 'Xuan Wu', 'Jialin Li', 'Jing Yao', 'Xueru Bai', 'Dongsheng Jiang', 'Yin Li', 'Deyu Meng'] | ['cs.CV'] | Remote sensing image object detection (RSIOD) aims to identify and locate
specific objects within satellite or aerial imagery. However, there is a
scarcity of labeled data in current RSIOD datasets, which significantly limits
the performance of current detection algorithms. Although existing techniques,
e.g., data augm... | 2024-11-23T09:04:33Z | null | null | null | AeroGen: Enhancing Remote Sensing Object Detection with Diffusion-Driven Data Generation | ['Datao Tang', 'Xiangyong Cao', 'Xuan Wu', 'Jialin Li', 'Jing Yao', 'Xueru Bai', 'Deyu Meng'] | 2,024 | Computer Vision and Pattern Recognition | 9 | 47 | ['Computer Science'] |
2,411.15523 | Enhancing Grammatical Error Detection using BERT with Cleaned Lang-8
Dataset | ['Rahul Nihalani', 'Kushal Shah'] | ['cs.CL', 'cs.AI'] | This paper presents an improved LLM-based model for Grammatical Error
Detection (GED), which is a very challenging and equally important problem for
many applications. The traditional approach to GED involved hand-designed
features, but recently, Neural Networks (NN) have automated the discovery of
these features, impr... | 2024-11-23T10:57:41Z | 10 pages, 6 tables, 20 references | null | null | null | null | null | null | null | null | null |
2,411.15558 | Reassessing Layer Pruning in LLMs: New Insights and Methods | ['Yao Lu', 'Hao Cheng', 'Yujie Fang', 'Zeyu Wang', 'Jiaheng Wei', 'Dongwei Xu', 'Qi Xuan', 'Xiaoniu Yang', 'Zhaowei Zhu'] | ['cs.LG', 'cs.CV'] | Although large language models (LLMs) have achieved remarkable success across
various domains, their considerable scale necessitates substantial
computational resources, posing significant challenges for deployment in
resource-constrained environments. Layer pruning, as a simple yet effective
compression method, remove... | 2024-11-23T13:31:16Z | null | null | null | Reassessing Layer Pruning in LLMs: New Insights and Methods | ['Yao Lu', 'Hao Cheng', 'Yujie Fang', 'Zeyu Wang', 'Jiaheng Wei', 'Dongwei Xu', 'Qi Xuan', 'Xiaoniu Yang', 'Zhaowei Zhu'] | 2,024 | arXiv.org | 4 | 75 | ['Computer Science'] |
2,411.1564 | AfriMed-QA: A Pan-African, Multi-Specialty, Medical Question-Answering
Benchmark Dataset | ['Tobi Olatunji', 'Charles Nimo', 'Abraham Owodunni', 'Tassallah Abdullahi', 'Emmanuel Ayodele', 'Mardhiyah Sanni', 'Chinemelu Aka', 'Folafunmi Omofoye', 'Foutse Yuehgoh', 'Timothy Faniran', 'Bonaventure F. P. Dossou', 'Moshood Yekini', 'Jonas Kemp', 'Katherine Heller', 'Jude Chidubem Omeke', 'Chidi Asuzu MD', 'Naome A... | ['cs.CL'] | Recent advancements in large language model(LLM) performance on medical
multiple choice question (MCQ) benchmarks have stimulated interest from
healthcare providers and patients globally. Particularly in low-and
middle-income countries (LMICs) facing acute physician shortages and lack of
specialists, LLMs offer a poten... | 2024-11-23T19:43:02Z | null | null | null | AfriMed-QA: A Pan-African, Multi-Specialty, Medical Question-Answering Benchmark Dataset | ['Tobi Olatunji', 'Charles Nimo', 'Abraham Owodunni', 'Tassallah Abdullahi', 'Emmanuel Ayodele', 'Mardhiyah Sanni', 'Chinemelu Aka', 'Folafunmi Omofoye', 'Foutse Yuehgoh', 'Timothy Faniran', 'Bonaventure F. P. Dossou', 'Moshood Yekini', 'Jonas Kemp', 'Katherine Heller', 'Jude Chidubem Omeke', 'Chidi Asuzu', 'Naome A. E... | 2,024 | arXiv.org | 3 | 38 | ['Computer Science'] |
2,411.15708 | LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of
Mixture-of-Experts with Post-Training | ['Xiaoye Qu', 'Daize Dong', 'Xuyang Hu', 'Tong Zhu', 'Weigao Sun', 'Yu Cheng'] | ['cs.CL'] | Recently, inspired by the concept of sparsity, Mixture-of-Experts (MoE)
models have gained increasing popularity for scaling model size while keeping
the number of activated parameters constant. In this study, we thoroughly
investigate the sparsity of the dense LLaMA model by constructing MoE for both
the attention (i.... | 2024-11-24T04:26:04Z | Technical report,13 pages | null | null | LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training | ['Xiaoye Qu', 'Daize Dong', 'Xuyang Hu', 'Tong Zhu', 'Weigao Sun', 'Yu Cheng'] | 2,024 | arXiv.org | 13 | 37 | ['Computer Science'] |
2,411.15734 | Development of Pre-Trained Transformer-based Models for the Nepali
Language | ['Prajwal Thapa', 'Jinu Nyachhyon', 'Mridul Sharma', 'Bal Krishna Bal'] | ['cs.CL', 'cs.LG'] | Transformer-based pre-trained language models have dominated the field of
Natural Language Processing (NLP) for quite some time now. However, the Nepali
language, spoken by approximately 32 million people worldwide, remains
significantly underrepresented in this domain. This underrepresentation is
primarily attributed ... | 2024-11-24T06:38:24Z | null | null | null | Development of Pre-Trained Transformer-based Models for the Nepali Language | ['Prajwal Thapa', 'Jinu Nyachhyon', 'Mridul Sharma', 'B. Bal'] | 2,024 | arXiv.org | 1 | 36 | ['Computer Science'] |
2,411.15738 | AnyEdit: Mastering Unified High-Quality Image Editing for Any Idea | ['Qifan Yu', 'Wei Chow', 'Zhongqi Yue', 'Kaihang Pan', 'Yang Wu', 'Xiaoyang Wan', 'Juncheng Li', 'Siliang Tang', 'Hanwang Zhang', 'Yueting Zhuang'] | ['cs.CV'] | Instruction-based image editing aims to modify specific image elements with
natural language instructions. However, current models in this domain often
struggle to accurately execute complex user instructions, as they are trained
on low-quality data with limited editing types. We present AnyEdit, a
comprehensive multi-... | 2024-11-24T07:02:56Z | Accepted by CVPR 2025 | null | null | null | null | null | null | null | null | null |
2,411.15941 | MobileMamba: Lightweight Multi-Receptive Visual Mamba Network | ['Haoyang He', 'Jiangning Zhang', 'Yuxuan Cai', 'Hongxu Chen', 'Xiaobin Hu', 'Zhenye Gan', 'Yabiao Wang', 'Chengjie Wang', 'Yunsheng Wu', 'Lei Xie'] | ['cs.CV'] | Previous research on lightweight models has primarily focused on CNNs and
Transformer-based designs. CNNs, with their local receptive fields, struggle to
capture long-range dependencies, while Transformers, despite their global
modeling capabilities, are limited by quadratic computational complexity in
high-resolution ... | 2024-11-24T18:01:05Z | 14 pages | null | null | null | null | null | null | null | null | null |
2411.16085 | Cautious Optimizers: Improving Training with One Line of Code | ['Kaizhao Liang', 'Lizhang Chen', 'Bo Liu', 'Qiang Liu'] | ['cs.LG', 'cs.AI', 'cs.CL', 'cs.CV', 'cs.DM'] | AdamW has been the default optimizer for transformer pretraining. For many
years, our community searched for faster and more stable optimizers with only
constrained positive outcomes. In this work, we propose a single-line
modification in Pytorch to any momentum-based optimizer, which we rename
cautious optimizer, e.g.... | 2024-11-25T04:36:01Z | null | null | null | null | null | null | null | null | null | null |
2411.16106 | UNOPose: Unseen Object Pose Estimation with an Unposed RGB-D Reference
Image | ['Xingyu Liu', 'Gu Wang', 'Ruida Zhang', 'Chenyangguang Zhang', 'Federico Tombari', 'Xiangyang Ji'] | ['cs.CV'] | Unseen object pose estimation methods often rely on CAD models or multiple
reference views, making the onboarding stage costly. To simplify reference
acquisition, we aim to estimate the unseen object's pose through a single
unposed RGB-D reference image. While previous works leverage reference images
as pose anchors to... | 2024-11-25T05:36:00Z | Accepted by CVPR'25 | null | null | null | null | null | null | null | null | null |
2411.16199 | VIRES: Video Instance Repainting via Sketch and Text Guided Generation | ['Shuchen Weng', 'Haojie Zheng', 'Peixuan Zhang', 'Yuchen Hong', 'Han Jiang', 'Si Li', 'Boxin Shi'] | ['cs.CV'] | We introduce VIRES, a video instance repainting method with sketch and text
guidance, enabling video instance repainting, replacement, generation, and
removal. Existing approaches struggle with temporal consistency and accurate
alignment with the provided sketch sequence. VIRES leverages the generative
priors of text-t... | 2024-11-25T08:55:41Z | null | null | VIRES: Video Instance Repainting via Sketch and Text Guided Generation | ['Shuchen Weng', 'Haojie Zheng', 'Peixuan Zhang', 'Yuchen Hong', 'Han Jiang', 'Si Li', 'Boxin Shi'] | 2024 | null | 0 | 52 | ['Computer Science']
2411.16239 | CS-Eval: A Comprehensive Large Language Model Benchmark for
CyberSecurity | ['Zhengmin Yu', 'Jiutian Zeng', 'Siyi Chen', 'Wenhan Xu', 'Dandan Xu', 'Xiangyu Liu', 'Zonghao Ying', 'Nan Wang', 'Yuan Zhang', 'Min Yang'] | ['cs.CR'] | Over the past year, there has been a notable rise in the use of large
language models (LLMs) for academic research and industrial practices within
the cybersecurity field. However, it remains a lack of comprehensive and
publicly accessible benchmarks to evaluate the performance of LLMs on
cybersecurity tasks. To addres... | 2024-11-25T09:54:42Z | null | null | null | null | null | null | null | null | null | null |
2411.16331 | Sonic: Shifting Focus to Global Audio Perception in Portrait Animation | ['Xiaozhong Ji', 'Xiaobin Hu', 'Zhihong Xu', 'Junwei Zhu', 'Chuming Lin', 'Qingdong He', 'Jiangning Zhang', 'Donghao Luo', 'Yi Chen', 'Qin Lin', 'Qinglin Lu', 'Chengjie Wang'] | ['cs.MM', 'cs.CV', 'cs.GR', 'cs.SD', 'eess.AS'] | The study of talking face generation mainly explores the intricacies of
synchronizing facial movements and crafting visually appealing,
temporally-coherent animations. However, due to the limited exploration of
global audio perception, current approaches predominantly employ auxiliary
visual and spatial knowledge to st... | 2024-11-25T12:24:52Z | refer to our main-page \url{https://jixiaozhong.github.io/Sonic/} | null | null | null | null | null | null | null | null | null |
2411.16341 | From CISC to RISC: language-model guided assembly transpilation | ['Ahmed Heakl', 'Chaimaa Abi', 'Rania Hossam', 'Abdulrahman Mahmoud'] | ['cs.PL', 'cs.AR'] | The transition from x86 to ARM architecture is becoming increasingly common
across various domains, primarily driven by ARM's energy efficiency and
improved performance across traditional sectors. However, this ISA shift poses
significant challenges, mainly due to the extensive legacy ecosystem of x86
software and lack... | 2024-11-25T12:37:07Z | null | null | null | null | null | null | null | null | null | null |
2411.16365 | Multi-modal Retrieval Augmented Multi-modal Generation: Datasets,
Evaluation Metrics and Strong Baselines | ['Zi-Ao Ma', 'Tian Lan', 'Rong-Cheng Tu', 'Yong Hu', 'Yu-Shi Zhu', 'Tong Zhang', 'Heyan Huang', 'Zhijing Wu', 'Xian-Ling Mao'] | ['cs.CL'] | We present a systematic investigation of Multi-modal Retrieval Augmented
Multi-modal Generation (M$^2$RAG), a novel task that enables foundation models
to process multi-modal web content and generate multi-modal responses, which
exhibits better information density and readability. Despite its potential
impact, M$^2$RAG... | 2024-11-25T13:20:19Z | null | null | null | null | null | null | null | null | null | null |
2411.16662 | A Supervised Machine Learning Approach for Assessing Grant Peer Review
Reports | ['Gabriel Okasa', 'Alberto de León', 'Michaela Strinzel', 'Anne Jorstad', 'Katrin Milzow', 'Matthias Egger', 'Stefan Müller'] | ['econ.EM'] | Peer review in grant evaluation informs funding decisions, but the contents
of peer review reports are rarely analyzed. In this work, we develop a
thoroughly tested pipeline to analyze the texts of grant peer review reports
using methods from applied Natural Language Processing (NLP) and machine
learning. We start by d... | 2024-11-25T18:46:34Z | added results and references | null | null | A Supervised Machine Learning Approach for Assessing Grant Peer Review Reports | ['Gabriel Okasa', 'Alberto de León', 'Michaela Strinzel', 'Anne Jorstad', 'Katrin Milzow', 'Matthias Egger', 'Stefan Müller'] | 2024 | null | 0 | 0 | ['Economics']
2411.16828 | CLIPS: An Enhanced CLIP Framework for Learning with Synthetic Captions | ['Yanqing Liu', 'Xianhang Li', 'Zeyu Wang', 'Bingchen Zhao', 'Cihang Xie'] | ['cs.CV'] | Previous works show that noisy, web-crawled image-text pairs may limit
vision-language pretraining like CLIP and propose learning with synthetic
captions as a promising alternative. Our work continues this effort,
introducing two simple yet effective designs to better leverage richly
described synthetic captions. First... | 2024-11-25T18:49:02Z | 12 pages | null | null | null | null | null | null | null | null | null |
2411.16863 | Augmenting Multimodal LLMs with Self-Reflective Tokens for
Knowledge-based Visual Question Answering | ['Federico Cocchi', 'Nicholas Moratelli', 'Marcella Cornia', 'Lorenzo Baraldi', 'Rita Cucchiara'] | ['cs.CV', 'cs.AI', 'cs.CL', 'cs.MM'] | Multimodal LLMs (MLLMs) are the natural extension of large language models to
handle multimodal inputs, combining text and image data. They have recently
garnered attention due to their capability to address complex tasks involving
both modalities. However, their effectiveness is limited to the knowledge
acquired durin... | 2024-11-25T19:01:03Z | CVPR 2025 | null | null | null | null | null | null | null | null | null |
2411.17000 | SatVision-TOA: A Geospatial Foundation Model for Coarse-Resolution
All-Sky Remote Sensing Imagery | ['Caleb S. Spradlin', 'Jordan A. Caraballo-Vega', 'Jian Li', 'Mark L. Carroll', 'Jie Gong', 'Paul M. Montesano'] | ['cs.CV', 'cs.AI', 'cs.LG'] | Foundation models have the potential to transform the landscape of remote
sensing (RS) data analysis by enabling large computer vision models to be
pre-trained on vast amounts of remote sensing data. These models can then be
fine-tuned with small amounts of labeled training and applied to a variety of
applications. Mos... | 2024-11-26T00:08:00Z | 19 pages, 5 figures | null | null | SatVision-TOA: A Geospatial Foundation Model for Coarse-Resolution All-Sky Remote Sensing Imagery | ['C. Spradlin', 'J. A. Caraballo-Vega', 'Jian Li', 'Mark L. Carroll', 'Jie Gong', 'P. Montesano'] | 2024 | arXiv.org | 1 | 0 | ['Computer Science']
2411.17176 | ChatGen: Automatic Text-to-Image Generation From FreeStyle Chatting | ['Chengyou Jia', 'Changliang Xia', 'Zhuohang Dang', 'Weijia Wu', 'Hangwei Qian', 'Minnan Luo'] | ['cs.CV', 'cs.AI'] | Despite the significant advancements in text-to-image (T2I) generative
models, users often face a trial-and-error challenge in practical scenarios.
This challenge arises from the complexity and uncertainty of tedious steps such
as crafting suitable prompts, selecting appropriate models, and configuring
specific argumen... | 2024-11-26T07:31:12Z | null | null | null | ChatGen: Automatic Text-to-Image Generation From FreeStyle Chatting | ['Chengyou Jia', 'Changliang Xia', 'Zhuohang Dang', 'Weijia Wu', 'Hangwei Qian', 'Minnan Luo'] | 2024 | arXiv.org | 2 | 0 | ['Computer Science']
2411.17190 | SelfSplat: Pose-Free and 3D Prior-Free Generalizable 3D Gaussian
Splatting | ['Gyeongjin Kang', 'Jisang Yoo', 'Jihyeon Park', 'Seungtae Nam', 'Hyeonsoo Im', 'Sangheon Shin', 'Sangpil Kim', 'Eunbyung Park'] | ['cs.CV'] | We propose SelfSplat, a novel 3D Gaussian Splatting model designed to perform
pose-free and 3D prior-free generalizable 3D reconstruction from unposed
multi-view images. These settings are inherently ill-posed due to the lack of
ground-truth data, learned geometric information, and the need to achieve
accurate 3D recon... | 2024-11-26T08:01:50Z | Project page: https://gynjn.github.io/selfsplat/ | null | null | SelfSplat: Pose-Free and 3D Prior-Free Generalizable 3D Gaussian Splatting | ['Gyeongjin Kang', 'Jisang Yoo', 'Jihyeon Park', 'Seungtae Nam', 'Hyeonsoo Im', 'Sangheon Shin', 'Sangpil Kim', 'Eunbyung Park'] | 2024 | arXiv.org | 6 | 73 | ['Computer Science']
2411.17196 | P2DFlow: A Protein Ensemble Generative Model with SE(3) Flow Matching | ['Yaowei Jin', 'Qi Huang', 'Ziyang Song', 'Mingyue Zheng', 'Dan Teng', 'Qian Shi'] | ['physics.bio-ph', 'cs.LG'] | Biological processes, functions, and properties are intricately linked to the
ensemble of protein conformations, rather than being solely determined by a
single stable conformation. In this study, we have developed P2DFlow, a
generative model based on SE(3) flow matching, to predict the structural
ensembles of proteins... | 2024-11-26T08:10:12Z | null | null | P2DFlow: A Protein Ensemble Generative Model with SE(3) Flow Matching | ['Yaowei Jin', 'Qi Huang', 'Ziyang Song', 'Mingyue Zheng', 'Dan Teng', 'Qian Shi'] | 2024 | Journal of Chemical Theory and Computation | 3 | 15 | ['Computer Science', 'Medicine', 'Physics']
2411.17203 | cWDM: Conditional Wavelet Diffusion Models for Cross-Modality 3D Medical
Image Synthesis | ['Paul Friedrich', 'Alicia Durrer', 'Julia Wolleb', 'Philippe C. Cattin'] | ['eess.IV', 'cs.CV'] | This paper contributes to the "BraTS 2024 Brain MR Image Synthesis Challenge"
and presents a conditional Wavelet Diffusion Model (cWDM) for directly solving
a paired image-to-image translation task on high-resolution volumes. While deep
learning-based brain tumor segmentation models have demonstrated clear clinical
uti... | 2024-11-26T08:17:57Z | BraTS 2024 (Global Synthesis) submission. Code:
https://github.com/pfriedri/cwdm | null | null | null | null | null | null | null | null | null |