| arxiv_id | title | authors | categories | summary | published | comments | journal_ref | doi | ss_title | ss_authors | ss_year | ss_venue | ss_citationCount | ss_referenceCount | ss_fieldsOfStudy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2409.02834 | CMM-Math: A Chinese Multimodal Math Dataset To Evaluate and Enhance the
Mathematics Reasoning of Large Multimodal Models | ['Wentao Liu', 'Qianjun Pan', 'Yi Zhang', 'Zhuo Liu', 'Ji Wu', 'Jie Zhou', 'Aimin Zhou', 'Qin Chen', 'Bo Jiang', 'Liang He'] | ['cs.CL'] | Large language models (LLMs) have obtained promising results in mathematical
reasoning, which is a foundational skill for human intelligence. Most previous
studies focus on improving and measuring the performance of LLMs based on
textual math reasoning datasets (e.g., MATH, GSM8K). Recently, a few
researchers have rele... | 2024-09-04T16:00:21Z | null | null | null | CMM-Math: A Chinese Multimodal Math Dataset To Evaluate and Enhance the Mathematics Reasoning of Large Multimodal Models | ['Wentao Liu', 'Qianjun Pan', 'Yi Zhang', 'Zhuo Liu', 'Ji Wu', 'Jie Zhou', 'Aimin Zhou', 'Qin Chen', 'Bo Jiang', 'Liang He'] | 2024 | arXiv.org | 7 | 42 | ['Computer Science'] |
2409.02889 | LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via a
Hybrid Architecture | ['Xidong Wang', 'Dingjie Song', 'Shunian Chen', 'Chen Zhang', 'Benyou Wang'] | ['cs.CL', 'cs.AI', 'cs.CV', 'cs.MM'] | Expanding the long-context capabilities of Multi-modal Large Language
Models~(MLLMs) is crucial for video understanding, high-resolution image
understanding, and multi-modal agents. This involves a series of systematic
optimizations, including model architecture, data construction and training
strategy, particularly ad... | 2024-09-04T17:25:21Z | 20 pages, 9 figures, 9 tables | null | null | LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture | ['Xidong Wang', 'Dingjie Song', 'Shunian Chen', 'Chen Zhang', 'Benyou Wang'] | 2024 | arXiv.org | 52 | 74 | ['Computer Science'] |
2409.02897 | LongCite: Enabling LLMs to Generate Fine-grained Citations in
Long-context QA | ['Jiajie Zhang', 'Yushi Bai', 'Xin Lv', 'Wanjun Gu', 'Danqing Liu', 'Minhao Zou', 'Shulin Cao', 'Lei Hou', 'Yuxiao Dong', 'Ling Feng', 'Juanzi Li'] | ['cs.CL'] | Though current long-context large language models (LLMs) have demonstrated
impressive capacities in answering user questions based on extensive text, the
lack of citations in their responses makes user verification difficult, leading
to concerns about their trustworthiness due to their potential hallucinations.
In this... | 2024-09-04T17:41:19Z | null | null | null | null | null | null | null | null | null | null |
2409.02908 | Masked Diffusion Models are Secretly Time-Agnostic Masked Models and
Exploit Inaccurate Categorical Sampling | ['Kaiwen Zheng', 'Yongxin Chen', 'Hanzi Mao', 'Ming-Yu Liu', 'Jun Zhu', 'Qinsheng Zhang'] | ['cs.LG', 'cs.AI', 'cs.CL'] | Masked diffusion models (MDMs) have emerged as a popular research topic for
generative modeling of discrete data, thanks to their superior performance over
other discrete diffusion models, and are rivaling the auto-regressive models
(ARMs) for language modeling tasks. The recent effort in simplifying the masked
diffusi... | 2024-09-04T17:48:19Z | Accepted at ICLR 2025 | null | null | null | null | null | null | null | null | null |
2409.02920 | RoboTwin: Dual-Arm Robot Benchmark with Generative Digital Twins (early
version) | ['Yao Mu', 'Tianxing Chen', 'Shijia Peng', 'Zanxin Chen', 'Zeyu Gao', 'Yude Zou', 'Lunkai Lin', 'Zhiqiang Xie', 'Ping Luo'] | ['cs.RO', 'cs.AI', 'cs.CL'] | In the rapidly advancing field of robotics, dual-arm coordination and complex
object manipulation are essential capabilities for developing advanced
autonomous systems. However, the scarcity of diverse, high-quality
demonstration data and real-world-aligned evaluation benchmarks severely limits
such development. To add... | 2024-09-04T17:59:52Z | Project page: https://robotwin-benchmark.github.io/early-version/ | null | null | null | null | null | null | null | null | null |
2409.02979 | Vec2Face: Scaling Face Dataset Generation with Loosely Constrained
Vectors | ['Haiyu Wu', 'Jaskirat Singh', 'Sicong Tian', 'Liang Zheng', 'Kevin W. Bowyer'] | ['cs.CV'] | This paper studies how to synthesize face images of non-existent persons, to
create a dataset that allows effective training of face recognition (FR)
models. Besides generating realistic face images, two other important goals
are: 1) the ability to generate a large number of distinct identities
(inter-class separation)... | 2024-09-04T17:59:51Z | Accepted at ICLR 2025 | null | null | Vec2Face: Scaling Face Dataset Generation with Loosely Constrained Vectors | ['Haiyu Wu', 'Jaskirat Singh', 'Sicong Tian', 'Liang Zheng', 'Kevin W. Bowyer'] | 2024 | International Conference on Learning Representations | 4 | 57 | ['Computer Science'] |
2409.03025 | No Detail Left Behind: Revisiting Self-Retrieval for Fine-Grained Image
Captioning | ['Manu Gaur', 'Darshan Singh', 'Makarand Tapaswi'] | ['cs.CV'] | Image captioning systems are unable to generate fine-grained captions as they
are trained on data that is either noisy (alt-text) or generic (human
annotations). This is further exacerbated by maximum likelihood training that
encourages generation of frequently occurring phrases. Previous works have
tried to address th... | 2024-09-04T18:32:39Z | Published at Transactions on Machine Learning Research (TMLR)
https://openreview.net/forum?id=gqh0yzPYdo | null | null | null | null | null | null | null | null | null |
2409.03080 | Explainable AI for computational pathology identifies model limitations
and tissue biomarkers | ['Jakub R. Kaczmarzyk', 'Joel H. Saltz', 'Peter K. Koo'] | ['q-bio.TO'] | Introduction: Deep learning models hold great promise for digital pathology,
but their opaque decision-making processes undermine trust and hinder clinical
adoption. Explainable AI methods are essential to enhance model transparency
and reliability. Methods: We developed HIPPO, an explainable AI framework that
systemat... | 2024-09-04T21:08:55Z | null | null | null | null | null | null | null | null | null | null |
2409.03137 | The AdEMAMix Optimizer: Better, Faster, Older | ['Matteo Pagliardini', 'Pierre Ablin', 'David Grangier'] | ['cs.LG', 'stat.ML'] | Momentum based optimizers are central to a wide range of machine learning
applications. These typically rely on an Exponential Moving Average (EMA) of
gradients, which decays exponentially the present contribution of older
gradients. This accounts for gradients being local linear approximations which
lose their relevan... | 2024-09-05T00:13:16Z | 38 pages, 33 figures | null | null | The AdEMAMix Optimizer: Better, Faster, Older | ['Matteo Pagliardini', 'Pierre Ablin', 'David Grangier'] | 2024 | International Conference on Learning Representations | 13 | 71 | ['Computer Science', 'Mathematics'] |
2409.03215 | xLAM: A Family of Large Action Models to Empower AI Agent Systems | ['Jianguo Zhang', 'Tian Lan', 'Ming Zhu', 'Zuxin Liu', 'Thai Hoang', 'Shirley Kokane', 'Weiran Yao', 'Juntao Tan', 'Akshara Prabhakar', 'Haolin Chen', 'Zhiwei Liu', 'Yihao Feng', 'Tulika Awalgaonkar', 'Rithesh Murthy', 'Eric Hu', 'Zeyuan Chen', 'Ran Xu', 'Juan Carlos Niebles', 'Shelby Heinecke', 'Huan Wang', 'Silvio Sa... | ['cs.CL', 'cs.AI', 'cs.LG'] | Autonomous agents powered by large language models (LLMs) have attracted
significant research interest. However, the open-source community faces many
challenges in developing specialized models for agent tasks, driven by the
scarcity of high-quality agent datasets and the absence of standard protocols
in this area. We ... | 2024-09-05T03:22:22Z | Technical report for the Salesforce xLAM model series | null | null | null | null | null | null | null | null | null |
2409.03277 | ChartMoE: Mixture of Diversely Aligned Expert Connector for Chart
Understanding | ['Zhengzhuo Xu', 'Bowen Qu', 'Yiyan Qi', 'Sinan Du', 'Chengjin Xu', 'Chun Yuan', 'Jian Guo'] | ['cs.AI', 'cs.CL', 'cs.CV'] | Automatic chart understanding is crucial for content comprehension and
document parsing. Multimodal Large Language Models (MLLMs) have demonstrated
remarkable capabilities in chart understanding through domain-specific
alignment and fine-tuning. However, current MLLMs still struggle to provide
faithful data and reliabl... | 2024-09-05T06:41:02Z | null | null | null | null | null | null | null | null | null | null |
2409.03444 | Fine-tuning large language models for domain adaptation: Exploration of
training strategies, scaling, model merging and synergistic capabilities | ['Wei Lu', 'Rachel K. Luu', 'Markus J. Buehler'] | ['cs.CL', 'cond-mat.mtrl-sci', 'cs.AI'] | The advancement of Large Language Models (LLMs) for domain applications in
fields such as materials science and engineering depends on the development of
fine-tuning strategies that adapt models for specialized, technical
capabilities. In this work, we explore the effects of Continued Pretraining
(CPT), Supervised Fine... | 2024-09-05T11:49:53Z | null | null | null | null | null | null | null | null | null | null |
2409.03753 | WildVis: Open Source Visualizer for Million-Scale Chat Logs in the Wild | ['Yuntian Deng', 'Wenting Zhao', 'Jack Hessel', 'Xiang Ren', 'Claire Cardie', 'Yejin Choi'] | ['cs.CL', 'cs.AI', 'cs.HC', 'cs.IR', 'cs.LG'] | The increasing availability of real-world conversation data offers exciting
opportunities for researchers to study user-chatbot interactions. However, the
sheer volume of this data makes manually examining individual conversations
impractical. To overcome this challenge, we introduce WildVis, an interactive
tool that e... | 2024-09-05T17:59:15Z | null | null | null | null | null | null | null | null | null | null |
2409.04005 | Qihoo-T2X: An Efficient Proxy-Tokenized Diffusion Transformer for
Text-to-Any-Task | ['Jing Wang', 'Ao Ma', 'Jiasong Feng', 'Dawei Leng', 'Yuhui Yin', 'Xiaodan Liang'] | ['cs.CV'] | The global self-attention mechanism in diffusion transformers involves
redundant computation due to the sparse and redundant nature of visual
information, and the attention map of tokens within a spatial window shows
significant similarity. To address this redundancy, we propose the
Proxy-Tokenized Diffusion Transforme... | 2024-09-06T03:13:45Z | null | null | null | Qihoo-T2X: An Efficient Proxy-Tokenized Diffusion Transformer for Text-to-Any-Task | ['Jing Wang', 'Ao Ma', 'Jiasong Feng', 'Dawei Leng', 'Yuhui Yin', 'Xiaodan Liang'] | 2024 | null | 4 | 0 | ['Computer Science'] |
2409.04109 | Can LLMs Generate Novel Research Ideas? A Large-Scale Human Study with
100+ NLP Researchers | ['Chenglei Si', 'Diyi Yang', 'Tatsunori Hashimoto'] | ['cs.CL', 'cs.AI', 'cs.CY', 'cs.HC', 'cs.LG'] | Recent advancements in large language models (LLMs) have sparked optimism
about their potential to accelerate scientific discovery, with a growing number
of works proposing research agents that autonomously generate and validate new
ideas. Despite this, no evaluations have shown that LLM systems can take the
very first... | 2024-09-06T08:25:03Z | main paper is 20 pages | null | null | Can LLMs Generate Novel Research Ideas? A Large-Scale Human Study with 100+ NLP Researchers | ['Chenglei Si', 'Diyi Yang', 'Tatsunori Hashimoto'] | 2024 | arXiv.org | 157 | 0 | ['Computer Science'] |
2409.04185 | Residual Stream Analysis with Multi-Layer SAEs | ['Tim Lawson', 'Lucy Farnik', 'Conor Houghton', 'Laurence Aitchison'] | ['cs.LG', 'cs.CL'] | Sparse autoencoders (SAEs) are a promising approach to interpreting the
internal representations of transformer language models. However, SAEs are
usually trained separately on each transformer layer, making it difficult to
use them to study how information flows across layers. To solve this problem,
we introduce the m... | 2024-09-06T11:01:55Z | ICLR 2025 Camera Ready. 45 pages, 41 figures | null | null | null | null | null | null | null | null | null |
2409.04269 | Open Language Data Initiative: Advancing Low-Resource Machine
Translation for Karakalpak | ['Mukhammadsaid Mamasaidov', 'Abror Shopulatov'] | ['cs.CL'] | This study presents several contributions for the Karakalpak language: a
FLORES+ devtest dataset translated to Karakalpak, parallel corpora for
Uzbek-Karakalpak, Russian-Karakalpak and English-Karakalpak of 100,000 pairs
each and open-sourced fine-tuned neural models for translation across these
languages. Our experime... | 2024-09-06T13:25:18Z | Submitted to WMT 2024 | null | null | Open Language Data Initiative: Advancing Low-Resource Machine Translation for Karakalpak | ['Mukhammadsaid Mamasaidov', 'Abror Shopulatov'] | 2024 | Conference on Machine Translation | 4 | 16 | ['Computer Science'] |
2409.04410 | Open-MAGVIT2: An Open-Source Project Toward Democratizing
Auto-regressive Visual Generation | ['Zhuoyan Luo', 'Fengyuan Shi', 'Yixiao Ge', 'Yujiu Yang', 'Limin Wang', 'Ying Shan'] | ['cs.CV', 'cs.AI'] | The Open-MAGVIT2 project produces an open-source replication of Google's
MAGVIT-v2 tokenizer, a tokenizer with a super-large codebook (i.e., $2^{18}$
codes), and achieves the state-of-the-art reconstruction performance on
ImageNet and UCF benchmarks. We also provide a tokenizer pre-trained on
large-scale data, signific... | 2024-09-06T17:14:53Z | null | null | null | Open-MAGVIT2: An Open-Source Project Toward Democratizing Auto-regressive Visual Generation | ['Zhuoyan Luo', 'Fengyuan Shi', 'Yixiao Ge', 'Yujiu Yang', 'Limin Wang', 'Ying Shan'] | 2024 | arXiv.org | 59 | 53 | ['Computer Science'] |
2409.04429 | VILA-U: a Unified Foundation Model Integrating Visual Understanding and
Generation | ['Yecheng Wu', 'Zhuoyang Zhang', 'Junyu Chen', 'Haotian Tang', 'Dacheng Li', 'Yunhao Fang', 'Ligeng Zhu', 'Enze Xie', 'Hongxu Yin', 'Li Yi', 'Song Han', 'Yao Lu'] | ['cs.CV', 'cs.LG'] | VILA-U is a Unified foundation model that integrates Video, Image, Language
understanding and generation. Traditional visual language models (VLMs) use
separate modules for understanding and generating visual content, which can
lead to misalignment and increased complexity. In contrast, VILA-U employs a
single autoregr... | 2024-09-06T17:49:56Z | Code: https://github.com/mit-han-lab/vila-u. The first two authors
contributed equally to this work | null | null | null | null | null | null | null | null | null |
2409.04774 | Untie the Knots: An Efficient Data Augmentation Strategy for
Long-Context Pre-Training in Language Models | ['Junfeng Tian', 'Da Zheng', 'Yang Cheng', 'Rui Wang', 'Colin Zhang', 'Debing Zhang'] | ['cs.CL', 'cs.AI'] | Large language models (LLM) have prioritized expanding the context window
from which models can incorporate more information. However, training models to
handle long contexts presents significant challenges. These include the
scarcity of high-quality natural long-context data, the potential for
performance degradation ... | 2024-09-07T09:28:55Z | null | null | null | Untie the Knots: An Efficient Data Augmentation Strategy for Long-Context Pre-Training in Language Models | ['Junfeng Tian', 'Da Zheng', 'Yang Cheng', 'Rui Wang', 'Colin Zhang', 'Debing Zhang'] | 2024 | arXiv.org | 5 | 40 | ['Computer Science'] |
2409.04828 | POINTS: Improving Your Vision-language Model with Affordable Strategies | ['Yuan Liu', 'Zhongyin Zhao', 'Ziyuan Zhuang', 'Le Tian', 'Xiao Zhou', 'Jie Zhou'] | ['cs.CV', 'cs.AI', 'cs.MM'] | In recent years, vision-language models have made significant strides,
excelling in tasks like optical character recognition and geometric
problem-solving. However, several critical issues remain: 1) Proprietary models
often lack transparency about their architectures, while open-source models
need more detailed ablati... | 2024-09-07T13:41:37Z | v2 | null | null | POINTS: Improving Your Vision-language Model with Affordable Strategies | ['Yuan Liu', 'Zhongyin Zhao', 'Ziyuan Zhuang', 'Le Tian', 'Xiao Zhou', 'Jie Zhou'] | 2024 | arXiv.org | 9 | 87 | ['Computer Science'] |
2409.05314 | Tele-LLMs: A Series of Specialized Large Language Models for
Telecommunications | ['Ali Maatouk', 'Kenny Chirino Ampudia', 'Rex Ying', 'Leandros Tassiulas'] | ['cs.IT', 'cs.AI', 'cs.LG', 'math.IT'] | The emergence of large language models (LLMs) has significantly impacted
various fields, from natural language processing to sectors like medicine and
finance. However, despite their rapid proliferation, the applications of LLMs
in telecommunications remain limited, often relying on general-purpose models
that lack dom... | 2024-09-09T03:58:51Z | null | null | null | Tele-LLMs: A Series of Specialized Large Language Models for Telecommunications | ['Ali Maatouk', 'Kenny Chirino Ampudia', 'Rex Ying', 'L. Tassiulas'] | 2024 | arXiv.org | 11 | 42 | ['Computer Science', 'Mathematics'] |
2409.05356 | IndicVoices-R: Unlocking a Massive Multilingual Multi-speaker Speech
Corpus for Scaling Indian TTS | ['Ashwin Sankar', 'Srija Anand', 'Praveen Srinivasa Varadhan', 'Sherry Thomas', 'Mehak Singal', 'Shridhar Kumar', 'Deovrat Mehendale', 'Aditi Krishana', 'Giri Raju', 'Mitesh Khapra'] | ['cs.CL', 'cs.LG', 'cs.SD', 'eess.SP'] | Recent advancements in text-to-speech (TTS) synthesis show that large-scale
models trained with extensive web data produce highly natural-sounding output.
However, such data is scarce for Indian languages due to the lack of
high-quality, manually subtitled data on platforms like LibriVox or YouTube. To
address this gap... | 2024-09-09T06:28:47Z | Accepted to NeurIPS 2024 Datasets and Benchmarks track | null | null | IndicVoices-R: Unlocking a Massive Multilingual Multi-speaker Speech Corpus for Scaling Indian TTS | ['Ashwin Sankar', 'Srija Anand', 'Praveena Varadhan', 'Sherry Thomas', 'Mehak Singal', 'Shridhar Kumar', 'Deovrat Mehendale', 'Aditi Krishana', 'Giri Raju', 'Mitesh M. Khapra'] | 2024 | Neural Information Processing Systems | 7 | 48 | ['Computer Science', 'Engineering'] |
2409.05556 | SciAgents: Automating scientific discovery through multi-agent
intelligent graph reasoning | ['Alireza Ghafarollahi', 'Markus J. Buehler'] | ['cs.AI', 'cond-mat.dis-nn', 'cond-mat.mtrl-sci', 'cs.CL', 'cs.LG'] | A key challenge in artificial intelligence is the creation of systems capable
of autonomously advancing scientific understanding by exploring novel domains,
identifying complex patterns, and uncovering previously unseen connections in
vast scientific data. In this work, we present SciAgents, an approach that
leverages ... | 2024-09-09T12:25:10Z | null | null | null | null | null | null | null | null | null | null |
2409.05677 | RIRAG: Regulatory Information Retrieval and Answer Generation | ['Tuba Gokhan', 'Kexin Wang', 'Iryna Gurevych', 'Ted Briscoe'] | ['cs.CL', 'cs.AI', 'cs.CE', 'cs.ET', 'cs.IR'] | Regulatory documents, issued by governmental regulatory bodies, establish
rules, guidelines, and standards that organizations must adhere to for legal
compliance. These documents, characterized by their length, complexity and
frequent updates, are challenging to interpret, requiring significant
allocation of time and e... | 2024-09-09T14:44:19Z | null | null | null | null | null | null | null | null | null | null |
2409.05816 | Improving Pretraining Data Using Perplexity Correlations | ['Tristan Thrush', 'Christopher Potts', 'Tatsunori Hashimoto'] | ['cs.CL', 'cs.LG', 'stat.ML'] | Quality pretraining data is often seen as the key to high-performance
language models. However, progress in understanding pretraining data has been
slow due to the costly pretraining runs required for data selection
experiments. We present a framework that avoids these costs and selects
high-quality pretraining data wi... | 2024-09-09T17:23:29Z | ICLR 2025 | null | null | null | null | null | null | null | null | null |
2409.05840 | MMEvol: Empowering Multimodal Large Language Models with Evol-Instruct | ['Run Luo', 'Haonan Zhang', 'Longze Chen', 'Ting-En Lin', 'Xiong Liu', 'Yuchuan Wu', 'Min Yang', 'Minzheng Wang', 'Pengpeng Zeng', 'Lianli Gao', 'Heng Tao Shen', 'Yunshui Li', 'Xiaobo Xia', 'Fei Huang', 'Jingkuan Song', 'Yongbin Li'] | ['cs.CL'] | The development of Multimodal Large Language Models (MLLMs) has seen
significant advancements with increasing demands in various fields (e.g.,
multimodal agents, embodied intelligence). While model-driven approaches
attempt to enhance MLLMs capabilities through diverse architectures, the gains
have become increasingly ... | 2024-09-09T17:44:00Z | null | null | null | null | null | null | null | null | null | null |
2409.05994 | MessIRve: A Large-Scale Spanish Information Retrieval Dataset | ['Francisco Valentini', 'Viviana Cotik', 'Damián Furman', 'Ivan Bercovich', 'Edgar Altszyler', 'Juan Manuel Pérez'] | ['cs.CL', 'cs.AI'] | Information retrieval (IR) is the task of finding relevant documents in
response to a user query. Although Spanish is the second most spoken native
language, current IR benchmarks lack Spanish data, hindering the development of
information access tools for Spanish speakers. We introduce MessIRve, a
large-scale Spanish ... | 2024-09-09T18:45:04Z | null | null | null | MessIRve: A Large-Scale Spanish Information Retrieval Dataset | ['Francisco Valentini', 'Viviana Cotik', 'D. Furman', 'Ivan Bercovich', 'E. Altszyler', 'Juan Manuel Pérez'] | 2024 | arXiv.org | 2 | 29 | ['Computer Science'] |
2409.06065 | DiffusionPen: Towards Controlling the Style of Handwritten Text
Generation | ['Konstantina Nikolaidou', 'George Retsinas', 'Giorgos Sfikas', 'Marcus Liwicki'] | ['cs.CV'] | Handwritten Text Generation (HTG) conditioned on text and style is a
challenging task due to the variability of inter-user characteristics and the
unlimited combinations of characters that form new words unseen during
training. Diffusion Models have recently shown promising results in HTG but
still remain under-explore... | 2024-09-09T20:58:25Z | null | null | null | DiffusionPen: Towards Controlling the Style of Handwritten Text Generation | ['Konstantina Nikolaidou', 'George Retsinas', 'Giorgos Sfikas', 'M. Liwicki'] | 2024 | European Conference on Computer Vision | 3 | 45 | ['Computer Science'] |
2409.06202 | RealisDance: Equip controllable character animation with realistic hands | ['Jingkai Zhou', 'Benzhi Wang', 'Weihua Chen', 'Jingqi Bai', 'Dongyang Li', 'Aixi Zhang', 'Hao Xu', 'Mingyang Yang', 'Fan Wang'] | ['cs.CV'] | Controllable character animation is an emerging task that generates character
videos controlled by pose sequences from given character images. Although
character consistency has made significant progress via reference UNet, another
crucial factor, pose control, has not been well studied by existing methods
yet, resulti... | 2024-09-10T04:14:11Z | Technical Report | null | null | null | null | null | null | null | null | null |
2409.06595 | GroUSE: A Benchmark to Evaluate Evaluators in Grounded Question
Answering | ['Sacha Muller', 'António Loison', 'Bilel Omrani', 'Gautier Viaud'] | ['cs.CL', 'I.2.7'] | Retrieval-Augmented Generation (RAG) has emerged as a common paradigm to use
Large Language Models (LLMs) alongside private and up-to-date knowledge bases.
In this work, we address the challenges of using LLM-as-a-Judge when evaluating
grounded answers generated by RAG systems. To assess the calibration and
discriminat... | 2024-09-10T15:39:32Z | Proceedings of the 31st International Conference on Computational
Linguistics | Proceedings of the 31st International Conference on Computational
Linguistics (2025), pages 4510 to 4534, Abu Dhabi, UAE. Association for
Computational Linguistics | null | null | null | null | null | null | null | null |
2409.06635 | MoWE-Audio: Multitask AudioLLMs with Mixture of Weak Encoders | ['Wenyu Zhang', 'Shuo Sun', 'Bin Wang', 'Xunlong Zou', 'Zhuohan Liu', 'Yingxu He', 'Geyu Lin', 'Nancy F. Chen', 'Ai Ti Aw'] | ['cs.SD', 'cs.AI', 'cs.CL', 'eess.AS'] | The rapid advancements in large language models (LLMs) have significantly
enhanced natural language processing capabilities, facilitating the development
of AudioLLMs that process and understand speech and audio inputs alongside
text. Existing AudioLLMs typically combine a pre-trained audio encoder with a
pre-trained L... | 2024-09-10T16:46:18Z | ICASSP 2025 | null | null | MoWE-Audio: Multitask AudioLLMs with Mixture of Weak Encoders | ['Wenyu Zhang', 'Shuo Sun', 'Bin Wang', 'Xunlong Zou', 'Zhuohan Liu', 'Yingxu He', 'Geyu Lin', 'Nancy F. Chen', 'AiTi Aw'] | 2024 | IEEE International Conference on Acoustics, Speech, and Signal Processing | 1 | 40 | ['Computer Science', 'Engineering'] |
2409.06656 | Sortformer: Seamless Integration of Speaker Diarization and ASR by
Bridging Timestamps and Tokens | ['Taejin Park', 'Ivan Medennikov', 'Kunal Dhawan', 'Weiqing Wang', 'He Huang', 'Nithin Rao Koluguri', 'Krishna C. Puvvada', 'Jagadeesh Balam', 'Boris Ginsburg'] | ['eess.AS', 'cs.CL', 'cs.LG', 'cs.SD'] | We propose Sortformer, a novel neural model for speaker diarization, trained
with unconventional objectives compared to existing end-to-end diarization
models. The permutation problem in speaker diarization has long been regarded
as a critical challenge. Most prior end-to-end diarization systems employ
permutation inva... | 2024-09-10T17:20:11Z | null | null | null | Sortformer: Seamless Integration of Speaker Diarization and ASR by Bridging Timestamps and Tokens | ['T. Park', 'I. Medennikov', 'Kunal Dhawan', 'Weiqing Wang', 'He Huang', 'N. Koluguri', 'Krishna C. Puvvada', 'Jagadeesh Balam', 'Boris Ginsburg'] | 2024 | arXiv.org | 5 | 86 | ['Computer Science', 'Engineering'] |
2409.06666 | LLaMA-Omni: Seamless Speech Interaction with Large Language Models | ['Qingkai Fang', 'Shoutao Guo', 'Yan Zhou', 'Zhengrui Ma', 'Shaolei Zhang', 'Yang Feng'] | ['cs.CL', 'cs.AI', 'cs.SD', 'eess.AS', 'I.2.7'] | Models like GPT-4o enable real-time interaction with large language models
(LLMs) through speech, significantly enhancing user experience compared to
traditional text-based interaction. However, there is still a lack of
exploration on how to build speech interaction models based on open-source
LLMs. To address this, we... | 2024-09-10T17:34:34Z | ICLR 2025 | null | null | LLaMA-Omni: Seamless Speech Interaction with Large Language Models | ['Qingkai Fang', 'Shoutao Guo', 'Yan Zhou', 'Zhengrui Ma', 'Shaolei Zhang', 'Yang Feng'] | 2024 | International Conference on Learning Representations | 54 | 66 | ['Computer Science', 'Engineering'] |
2409.07146 | Gated Slot Attention for Efficient Linear-Time Sequence Modeling | ['Yu Zhang', 'Songlin Yang', 'Ruijie Zhu', 'Yue Zhang', 'Leyang Cui', 'Yiqiao Wang', 'Bolun Wang', 'Freda Shi', 'Bailin Wang', 'Wei Bi', 'Peng Zhou', 'Guohong Fu'] | ['cs.CL'] | Linear attention Transformers and their gated variants, celebrated for
enabling parallel training and efficient recurrent inference, still fall short
in recall-intensive tasks compared to traditional Transformers and demand
significant resources for training from scratch. This paper introduces Gated
Slot Attention (GSA... | 2024-09-11T09:49:50Z | NeurIPS 2024 | null | null | Gated Slot Attention for Efficient Linear-Time Sequence Modeling | ['Yu Zhang', 'Songlin Yang', 'Ruijie Zhu', 'Yue Zhang', 'Leyang Cui', 'Yiqiao Wang', 'Bolun Wang', 'Freda Shi', 'Bailin Wang', 'Wei Bi', 'Peng Zhou', 'Guohong Fu'] | 2024 | Neural Information Processing Systems | 24 | 92 | ['Computer Science'] |
2409.07431 | Synthetic continued pretraining | ['Zitong Yang', 'Neil Band', 'Shuangping Li', 'Emmanuel Candès', 'Tatsunori Hashimoto'] | ['cs.LG', 'cs.AI', 'cs.CL', 'stat.ML'] | Pretraining on large-scale, unstructured internet text enables language
models to acquire a significant amount of world knowledge. However, this
knowledge acquisition is data-inefficient--to learn a given fact, models must
be trained on hundreds to thousands of diverse representations of it. This
poses a challenge when... | 2024-09-11T17:21:59Z | Updated organization of experimental results and methods
introduction. Released the dataset and model weights artifact | null | null | null | null | null | null | null | null | null |
2409.07437 | Salmon: A Suite for Acoustic Language Model Evaluation | ['Gallil Maimon', 'Amit Roth', 'Yossi Adi'] | ['cs.SD', 'cs.CL', 'eess.AS'] | Speech language models have recently demonstrated great potential as
universal speech processing systems. Such models have the ability to model the
rich acoustic information existing in audio signals, beyond spoken content,
such as emotion, background noise, etc. Despite this, evaluation benchmarks
which evaluate aware... | 2024-09-11T17:34:52Z | ICASSP 2025, project page -
https://pages.cs.huji.ac.il/adiyoss-lab/salmon/ | null | null | null | null | null | null | null | null | null |
2409.07447 | StereoCrafter: Diffusion-based Generation of Long and High-fidelity
Stereoscopic 3D from Monocular Videos | ['Sijie Zhao', 'Wenbo Hu', 'Xiaodong Cun', 'Yong Zhang', 'Xiaoyu Li', 'Zhe Kong', 'Xiangjun Gao', 'Muyao Niu', 'Ying Shan'] | ['cs.CV', 'cs.GR', 'I.3.0; I.4.0'] | This paper presents a novel framework for converting 2D videos to immersive
stereoscopic 3D, addressing the growing demand for 3D content in immersive
experience. Leveraging foundation models as priors, our approach overcomes the
limitations of traditional methods and boosts the performance to ensure the
high-fidelity ... | 2024-09-11T17:52:07Z | 11 pages, 10 figures | null | null | null | null | null | null | null | null | null |
2409.07452 | Hi3D: Pursuing High-Resolution Image-to-3D Generation with Video
Diffusion Models | ['Haibo Yang', 'Yang Chen', 'Yingwei Pan', 'Ting Yao', 'Zhineng Chen', 'Chong-Wah Ngo', 'Tao Mei'] | ['cs.CV', 'cs.MM'] | Despite having tremendous progress in image-to-3D generation, existing
methods still struggle to produce multi-view consistent images with
high-resolution textures in detail, especially in the paradigm of 2D diffusion
that lacks 3D awareness. In this work, we present High-resolution Image-to-3D
model (Hi3D), a new vide... | 2024-09-11T17:58:57Z | ACM Multimedia 2024. Source code is available at
\url{https://github.com/yanghb22-fdu/Hi3D-Official} | null | null | null | null | null | null | null | null | null |
2409.07556 | SSR-Speech: Towards Stable, Safe and Robust Zero-shot Text-based Speech
Editing and Synthesis | ['Helin Wang', 'Meng Yu', 'Jiarui Hai', 'Chen Chen', 'Yuchen Hu', 'Rilin Chen', 'Najim Dehak', 'Dong Yu'] | ['eess.AS', 'cs.SD'] | In this paper, we introduce SSR-Speech, a neural codec autoregressive model
designed for stable, safe, and robust zero-shot textbased speech editing and
text-to-speech synthesis. SSR-Speech is built on a Transformer decoder and
incorporates classifier-free guidance to enhance the stability of the
generation process. A ... | 2024-09-11T18:24:07Z | ICASSP 2025 | null | null | null | null | null | null | null | null | null |
2409.07571 | FaVoR: Features via Voxel Rendering for Camera Relocalization | ['Vincenzo Polizzi', 'Marco Cannici', 'Davide Scaramuzza', 'Jonathan Kelly'] | ['cs.CV', 'cs.RO'] | Camera relocalization methods range from dense image alignment to direct
camera pose regression from a query image. Among these, sparse feature matching
stands out as an efficient, versatile, and generally lightweight approach with
numerous applications. However, feature-based methods often struggle with
significant vi... | 2024-09-11T18:58:16Z | In Proceedings of the IEEE/CVF Winter Conference on Applications of
Computer Vision (WACV), Tucson, Arizona, US, Feb 28-Mar 4, 2025 | null | 10.1109/WACV61041.2025.00015 | null | null | null | null | null | null | null |
2,409.07737 | Ruri: Japanese General Text Embeddings | ['Hayato Tsukagoshi', 'Ryohei Sasano'] | ['cs.CL'] | We report the development of Ruri, a series of Japanese general text
embedding models. While the development of general-purpose text embedding
models in English and multilingual contexts has been active in recent years,
model development in Japanese remains insufficient. The primary reasons for
this are the lack of dat... | 2024-09-12T04:06:31Z | null | null | null | Ruri: Japanese General Text Embeddings | ['Hayato Tsukagoshi', 'Ryohei Sasano'] | 2,024 | arXiv.org | 1 | 43 | ['Computer Science'] |
2,409.07972 | Deep Height Decoupling for Precise Vision-based 3D Occupancy Prediction | ['Yuan Wu', 'Zhiqiang Yan', 'Zhengxue Wang', 'Xiang Li', 'Le Hui', 'Jian Yang'] | ['cs.CV'] | The task of vision-based 3D occupancy prediction aims to reconstruct 3D
geometry and estimate its semantic classes from 2D color images, where the
2D-to-3D view transformation is an indispensable step. Most previous methods
conduct forward projection, such as BEVPooling and VoxelPooling, both of which
map the 2D image ... | 2024-09-12T12:12:19Z | null | null | null | null | null | null | null | null | null | null |
2,409.08107 | WhisperNER: Unified Open Named Entity and Speech Recognition | ['Gil Ayache', 'Menachem Pirchi', 'Aviv Navon', 'Aviv Shamsian', 'Gill Hetz', 'Joseph Keshet'] | ['cs.CL', 'cs.LG'] | Integrating named entity recognition (NER) with automatic speech recognition
(ASR) can significantly enhance transcription accuracy and informativeness. In
this paper, we introduce WhisperNER, a novel model that allows joint speech
transcription and entity recognition. WhisperNER supports open-type NER,
enabling recogn... | 2024-09-12T15:00:56Z | null | null | null | WhisperNER: Unified Open Named Entity and Speech Recognition | ['Gil Ayache', 'Menachem Pirchi', 'Aviv Navon', 'Aviv Shamsian', 'Gil Hetz', 'Joseph Keshet'] | 2,024 | arXiv.org | 1 | 18 | ['Computer Science'] |
2,409.0824 | IFAdapter: Instance Feature Control for Grounded Text-to-Image
Generation | ['Yinwei Wu', 'Xianpan Zhou', 'Bing Ma', 'Xuefeng Su', 'Kai Ma', 'Xinchao Wang'] | ['cs.CV', 'cs.AI'] | While Text-to-Image (T2I) diffusion models excel at generating visually
appealing images of individual instances, they struggle to accurately position
and control the features generation of multiple instances. The Layout-to-Image
(L2I) task was introduced to address the positioning challenges by
incorporating bounding ... | 2024-09-12T17:39:23Z | null | null | null | IFAdapter: Instance Feature Control for Grounded Text-to-Image Generation | ['Yinwei Wu', 'Xianpan Zhou', 'Bing Ma', 'Xuefeng Su', 'Kai Ma', 'Xinchao Wang'] | 2,024 | arXiv.org | 7 | 66 | ['Computer Science'] |
2,409.08264 | Windows Agent Arena: Evaluating Multi-Modal OS Agents at Scale | ['Rogerio Bonatti', 'Dan Zhao', 'Francesco Bonacci', 'Dillon Dupont', 'Sara Abdali', 'Yinheng Li', 'Yadong Lu', 'Justin Wagle', 'Kazuhito Koishida', 'Arthur Bucker', 'Lawrence Jang', 'Zack Hui'] | ['cs.AI'] | Large language models (LLMs) show remarkable potential to act as computer
agents, enhancing human productivity and software accessibility in multi-modal
tasks that require planning and reasoning. However, measuring agent performance
in realistic environments remains a challenge since: (i) most benchmarks are
limited to... | 2024-09-12T17:56:43Z | null | null | null | null | null | null | null | null | null | null |
2,409.08276 | AnySkin: Plug-and-play Skin Sensing for Robotic Touch | ['Raunaq Bhirangi', 'Venkatesh Pattabiraman', 'Enes Erciyes', 'Yifeng Cao', 'Tess Hellebrekers', 'Lerrel Pinto'] | ['cs.RO', 'cs.AI'] | While tactile sensing is widely accepted as an important and useful sensing
modality, its use pales in comparison to other sensory modalities like vision
and proprioception. AnySkin addresses the critical challenges that impede the
use of tactile sensing -- versatility, replaceability, and data reusability.
Building on... | 2024-09-12T17:59:44Z | null | null | null | null | null | null | null | null | null | null |
2,409.08425 | SoloAudio: Target Sound Extraction with Language-oriented Audio
Diffusion Transformer | ['Helin Wang', 'Jiarui Hai', 'Yen-Ju Lu', 'Karan Thakkar', 'Mounya Elhilali', 'Najim Dehak'] | ['eess.AS', 'cs.SD'] | In this paper, we introduce SoloAudio, a novel diffusion-based generative
model for target sound extraction (TSE). Our approach trains latent diffusion
models on audio, replacing the previous U-Net backbone with a skip-connected
Transformer that operates on latent features. SoloAudio supports both
audio-oriented and la... | 2024-09-12T23:12:25Z | Submitted to ICASSP 2025 | null | null | null | null | null | null | null | null | null |
2,409.08513 | Mamba-YOLO-World: Marrying YOLO-World with Mamba for Open-Vocabulary
Detection | ['Haoxuan Wang', 'Qingdong He', 'Jinlong Peng', 'Hao Yang', 'Mingmin Chi', 'Yabiao Wang'] | ['cs.CV'] | Open-vocabulary detection (OVD) aims to detect objects beyond a predefined
set of categories. As a pioneering model incorporating the YOLO series into
OVD, YOLO-World is well-suited for scenarios prioritizing speed and efficiency.
However, its performance is hindered by its neck feature fusion mechanism,
which causes t... | 2024-09-13T03:23:52Z | null | null | null | null | null | null | null | null | null | null |
2,409.08523 | Eir: Thai Medical Large Language Models | ['Yutthakorn Thiprak', 'Rungtam Ngodngamthaweesuk', 'Songtam Ngodngamtaweesuk'] | ['cs.CL'] | We present Eir-8B, a large language model with 8 billion parameters,
specifically designed to enhance the accuracy of handling medical tasks in the
Thai language. This model focuses on providing clear and easy-to-understand
answers for both healthcare professionals and patients, thereby improving the
efficiency of diag... | 2024-09-13T04:06:00Z | typos corrected, and references added | null | null | null | null | null | null | null | null | null |
2,409.08589 | Domain-Invariant Representation Learning of Bird Sounds | ['Ilyass Moummad', 'Romain Serizel', 'Emmanouil Benetos', 'Nicolas Farrugia'] | ['cs.SD', 'eess.AS'] | Passive acoustic monitoring (PAM) is crucial for bioacoustic research,
enabling non-invasive species tracking and biodiversity monitoring. Citizen
science platforms provide large annotated datasets from focal recordings, where
the target species is intentionally recorded. However, PAM requires monitoring
in passive sou... | 2024-09-13T07:09:17Z | null | null | null | Domain-Invariant Representation Learning of Bird Sounds | ['Ilyass Moummad', 'Romain Serizel', 'Emmanouil Benetos', 'Nicolas Farrugia'] | 2,024 | arXiv.org | 2 | 39 | ['Computer Science', 'Engineering'] |
2,409.08695 | Precision Aquaculture: An Integrated Computer Vision and IoT Approach
for Optimized Tilapia Feeding | ['Rania Hossam', 'Ahmed Heakl', 'Walid Gomaa'] | ['cs.CV', 'cs.AI', 'cs.LG', 'cs.RO', 'cs.SY', 'eess.SY'] | Traditional fish farming practices often lead to inefficient feeding,
resulting in environmental issues and reduced productivity. We developed an
innovative system combining computer vision and IoT technologies for precise
Tilapia feeding. Our solution uses real-time IoT sensors to monitor water
quality parameters and ... | 2024-09-13T10:27:27Z | 8 pages, 6 figures, 3 tables, 21st International Conference on
Informatics in Control, Automation, and Robotics | null | null | null | null | null | null | null | null | null |
2,409.08857 | InstantDrag: Improving Interactivity in Drag-based Image Editing | ['Joonghyuk Shin', 'Daehyeon Choi', 'Jaesik Park'] | ['cs.CV'] | Drag-based image editing has recently gained popularity for its interactivity
and precision. However, despite the ability of text-to-image models to generate
samples within a second, drag editing still lags behind due to the challenge of
accurately reflecting user interaction while maintaining image content. Some
exist... | 2024-09-13T14:19:27Z | SIGGRAPH Asia 2024. Project webpage:
https://joonghyuk.com/instantdrag-web/ | null | null | InstantDrag: Improving Interactivity in Drag-based Image Editing | ['Joonghyuk Shin', 'Daehyeon Choi', 'Jaesik Park'] | 2,024 | ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia | 8 | 52 | ['Computer Science'] |
2,409.09001 | E2MoCase: A Dataset for Emotional, Event and Moral Observations in News
Articles on High-impact Legal Cases | ['Candida M. Greco', 'Lorenzo Zangari', 'Davide Picca', 'Andrea Tagarelli'] | ['cs.CL', 'cs.AI', 'cs.CY', 'cs.DL', 'physics.soc-ph'] | The way media reports on legal cases can significantly shape public opinion,
often embedding subtle biases that influence societal views on justice and
morality. Analyzing these biases requires a holistic approach that captures the
emotional tone, moral framing, and specific events within the narratives. In
this work w... | 2024-09-13T17:31:09Z | null | null | null | null | null | null | null | null | null | null |
2,409.09143 | DomURLs_BERT: Pre-trained BERT-based Model for Malicious Domains and
URLs Detection and Classification | ['Abdelkader El Mahdaouy', 'Salima Lamsiyah', 'Meryem Janati Idrissi', 'Hamza Alami', 'Zakaria Yartaoui', 'Ismail Berrada'] | ['cs.CR', 'cs.CL', '68T07, 68M25', 'I.2; C.2'] | Detecting and classifying suspicious or malicious domain names and URLs is
a fundamental task in cybersecurity. To leverage such indicators of compromise,
cybersecurity vendors and practitioners often maintain and update blacklists of
known malicious domains and URLs. However, blacklists frequently fail to
identify emerg... | 2024-09-13T18:59:13Z | null | null | null | null | null | null | null | null | null | null |
2,409.09144 | PrimeDepth: Efficient Monocular Depth Estimation with a Stable Diffusion
Preimage | ['Denis Zavadski', 'Damjan Kalšan', 'Carsten Rother'] | ['cs.CV'] | This work addresses the task of zero-shot monocular depth estimation. A
recent advance in this field has been the idea of utilising Text-to-Image
foundation models, such as Stable Diffusion. Foundation models provide a rich
and generic image representation, and therefore, little training data is
required to reformulate... | 2024-09-13T19:03:48Z | null | null | null | null | null | null | null | null | null | null |
2,409.09173 | Phikon-v2, A large and public feature extractor for biomarker prediction | ['Alexandre Filiot', 'Paul Jacob', 'Alice Mac Kain', 'Charlie Saillard'] | ['eess.IV', 'cs.AI', 'cs.CV'] | Gathering histopathology slides from over 100 publicly available cohorts, we
compile a diverse dataset of 460 million pathology tiles covering more than 30
cancer sites. Using this dataset, we train a large self-supervised vision
transformer using DINOv2 and publicly release one iteration of this model for
further expe... | 2024-09-13T20:12:29Z | null | null | null | null | null | null | null | null | null | null |
2,409.09305 | The T05 System for The VoiceMOS Challenge 2024: Transfer Learning from
Deep Image Classifier to Naturalness MOS Prediction of High-Quality Synthetic
Speech | ['Kaito Baba', 'Wataru Nakata', 'Yuki Saito', 'Hiroshi Saruwatari'] | ['cs.SD', 'cs.AI', 'cs.LG', 'eess.AS'] | We present our system (denoted as T05) for the VoiceMOS Challenge (VMC) 2024.
Our system was designed for the VMC 2024 Track 1, which focused on the accurate
prediction of naturalness mean opinion score (MOS) for high-quality synthetic
speech. In addition to a pretrained self-supervised learning (SSL)-based speech
feat... | 2024-09-14T05:03:18Z | Accepted by IEEE SLT 2024. Our MOS prediction system (UTMOSv2) is
available in https://github.com/sarulab-speech/UTMOSv2 | null | null | The T05 System for the voicemos challenge 2024: Transfer Learning from Deep Image Classifier to Naturalness MOS Prediction of High-Quality Synthetic Speech | ['Kaito Baba', 'Wataru Nakata', 'Yuki Saito', 'Hiroshi Saruwatari'] | 2,024 | Spoken Language Technology Workshop | 17 | 31 | ['Computer Science', 'Engineering'] |
2,409.09353 | Overcoming linguistic barriers in code assistants: creating a QLoRA
adapter to improve support for Russian-language code writing instructions | ['C. B. Pronin', 'A. V. Volosova', 'A. V. Ostroukh', 'Yu. N. Strogov'] | ['cs.SE', 'cs.CL', 'cs.LG'] | In this paper, an approach to training and evaluating an adapter model for
the popular language model "zephyr-7b-beta" is described. The adapter was
developed to improve the performance of the base model in tasks related to
programming and understanding the Russian language. Considering the high
quality of the original... | 2024-09-14T07:49:29Z | 10 pages, 4 figures | null | null | Overcoming linguistic barriers in code assistants: creating a QLoRA adapter to improve support for Russian-language code writing instructions | ['C. B. Pronin', 'A. V. Volosova', 'A. Ostroukh', 'Yu. N. Strogov'] | 2,024 | Dynamics of Complex Systems - XXI century | 1 | 2 | ['Computer Science'] |
2,409.09741 | Benchmarking LLMs in Political Content Text-Annotation: Proof-of-Concept
with Toxicity and Incivility Data | ['Bastián González-Bustamante'] | ['cs.CL', 'cs.AI', '68T50 (Primary) 91F10, 91F20 (Secondary)'] | This article benchmarked the ability of OpenAI's GPTs and a number of
open-source LLMs to perform annotation tasks on political content. We used a
novel protest event dataset comprising more than three million digital
interactions and created a gold standard that includes ground-truth labels
annotated by human coders a... | 2024-09-15T14:11:24Z | Paper prepared for delivery at the 8th Monash-Warwick-Zurich
Text-as-Data Workshop, September 16-17, 2024: 11 pages, 3 tables, 3 figures | null | null | Benchmarking LLMs in Political Content Text-Annotation: Proof-of-Concept with Toxicity and Incivility Data | ["Basti'an Gonz'alez-Bustamante"] | 2,024 | arXiv.org | 2 | 39 | ['Computer Science'] |
2,409.09788 | Reasoning Paths with Reference Objects Elicit Quantitative Spatial
Reasoning in Large Vision-Language Models | ['Yuan-Hong Liao', 'Rafid Mahmood', 'Sanja Fidler', 'David Acuna'] | ['cs.CV', 'cs.CL'] | Despite recent advances demonstrating vision-language models' (VLMs)
abilities to describe complex relationships in images using natural language,
their capability to quantitatively reason about object sizes and distances
remains underexplored. In this work, we introduce a manually annotated
benchmark, Q-Spatial Bench,... | 2024-09-15T16:45:42Z | 20 pages, 13 figures | null | null | null | null | null | null | null | null | null |
2,409.09811 | PROSE-FD: A Multimodal PDE Foundation Model for Learning Multiple
Operators for Forecasting Fluid Dynamics | ['Yuxuan Liu', 'Jingmin Sun', 'Xinjie He', 'Griffin Pinney', 'Zecheng Zhang', 'Hayden Schaeffer'] | ['cs.LG', 'cs.NA', 'math.NA', 'physics.flu-dyn'] | We propose PROSE-FD, a zero-shot multimodal PDE foundational model for
simultaneous prediction of heterogeneous two-dimensional physical systems
related to distinct fluid dynamics settings. These systems include shallow
water equations and the Navier-Stokes equations with incompressible and
compressible flow, regular a... | 2024-09-15T18:20:15Z | null | null | null | PROSE-FD: A Multimodal PDE Foundation Model for Learning Multiple Operators for Forecasting Fluid Dynamics | ['Yuxuan Liu', 'Jingmin Sun', 'Xinjie He', 'Griffin Pinney', 'Zecheng Zhang', 'Hayden Schaeffer'] | 2,024 | arXiv.org | 9 | 50 | ['Computer Science', 'Mathematics', 'Physics'] |
2,409.10103 | Self-Supervised Syllable Discovery Based on Speaker-Disentangled HuBERT | ['Ryota Komatsu', 'Takahiro Shinozaki'] | ['cs.CL', 'cs.SD', 'eess.AS'] | Self-supervised speech representation learning has become essential for
extracting meaningful features from untranscribed audio. Recent advances
highlight the potential of deriving discrete symbols from the features
correlated with linguistic units, which enables text-less training across
diverse tasks. In particular, ... | 2024-09-16T09:07:08Z | Accepted by IEEE SLT 2024 | null | null | Self-Supervised Syllable Discovery Based on Speaker-Disentangled Hubert | ['Ryota Komatsu', 'Takahiro Shinozaki'] | 2,024 | Spoken Language Technology Workshop | 1 | 29 | ['Computer Science', 'Engineering'] |
2,409.10164 | Quantile Regression for Distributional Reward Models in RLHF | ['Nicolai Dorka'] | ['cs.LG', 'cs.AI', 'cs.CL'] | Reinforcement learning from human feedback (RLHF) has become a key method for
aligning large language models (LLMs) with human preferences through the use of
reward models. However, traditional reward models typically generate point
estimates, which oversimplify the diversity and complexity of human values and
preferen... | 2024-09-16T10:54:04Z | null | null | null | Quantile Regression for Distributional Reward Models in RLHF | ['Nicolai Dorka'] | 2,024 | arXiv.org | 26 | 36 | ['Computer Science'] |
2,409.10168 | Algorithmic Behaviors Across Regions: A Geolocation Audit of YouTube
Search for COVID-19 Misinformation Between the United States and South Africa | ['Hayoung Jung', 'Prerna Juneja', 'Tanushree Mitra'] | ['cs.CY', 'cs.AI', 'cs.HC'] | Despite being an integral tool for finding health-related information online,
YouTube has faced criticism for disseminating COVID-19 misinformation globally
to its users. Yet, prior audit studies have predominantly investigated YouTube
within the Global North contexts, often overlooking the Global South. To
address thi... | 2024-09-16T10:56:43Z | 30 pages. Accepted at ICWSM 2025 | null | null | Algorithmic Behaviors Across Regions: A Geolocation Audit of YouTube Search for COVID-19 Misinformation between the United States and South Africa | ['Hayoung Jung', 'Prerna Juneja', 'Tanushree Mitra'] | 2,024 | International Conference on Web and Social Media | 1 | 92 | ['Computer Science'] |
2,409.10173 | jina-embeddings-v3: Multilingual Embeddings With Task LoRA | ['Saba Sturua', 'Isabelle Mohr', 'Mohammad Kalim Akram', 'Michael Günther', 'Bo Wang', 'Markus Krimmel', 'Feng Wang', 'Georgios Mastrapas', 'Andreas Koukounas', 'Nan Wang', 'Han Xiao'] | ['cs.CL', 'cs.AI', 'cs.IR', '68T50', 'I.2.7'] | We introduce jina-embeddings-v3, a novel text embedding model with 570
million parameters, which achieves state-of-the-art performance on multilingual data
and long-context retrieval tasks, supporting context lengths of up to 8192
tokens. The model includes a set of task-specific Low-Rank Adaptation (LoRA)
adapters to genera... | 2024-09-16T11:10:29Z | 20 pages, pp11-13 references, pp14-20 appendix and experiment tables | null | null | jina-embeddings-v3: Multilingual Embeddings With Task LoRA | ['Saba Sturua', 'Isabelle Mohr', 'Mohammad Kalim Akram', 'Michael Gunther', 'Bo Wang', 'Markus Krimmel', 'Feng Wang', 'Georgios Mastrapas', 'Andreas Koukounas', 'Nan Wang', 'Han Xiao'] | 2,024 | arXiv.org | 36 | 47 | ['Computer Science'] |
2,409.10309 | beeFormer: Bridging the Gap Between Semantic and Interaction Similarity
in Recommender Systems | ['Vojtěch Vančura', 'Pavel Kordík', 'Milan Straka'] | ['cs.IR'] | Recommender systems often use text-side information to improve their
predictions, especially in cold-start or zero-shot recommendation scenarios,
where traditional collaborative filtering approaches cannot be used. Many
approaches to text-mining side information for recommender systems have been
proposed over recent ye... | 2024-09-16T14:15:42Z | Accepted to RecSys 2024 | null | 10.1145/3640457.3691707 | null | null | null | null | null | null | null |
2,409.10594 | Kolmogorov-Arnold Transformer | ['Xingyi Yang', 'Xinchao Wang'] | ['cs.LG', 'cs.AI', 'cs.CV', 'cs.NE'] | Transformers stand as the cornerstone of modern deep learning.
Traditionally, these models rely on multi-layer perceptron (MLP) layers to mix
the information between channels. In this paper, we introduce the
Kolmogorov-Arnold Transformer (KAT), a novel architecture that replaces MLP
layers with Kolmogorov-Arnold Netwo... | 2024-09-16T17:54:51Z | Code: https://github.com/Adamdad/kat | null | null | null | null | null | null | null | null | null |
2,409.10721 | A Missing Data Imputation GAN for Character Sprite Generation | ['Flávio Coutinho', 'Luiz Chaimowicz'] | ['cs.CV', 'cs.AI', 'cs.GR'] | Creating and updating pixel art character sprites with many frames spanning
different animations and poses takes time and can quickly become repetitive.
However, that can be partially automated to allow artists to focus on more
creative tasks. In this work, we concentrate on creating pixel art character
sprites in a ta... | 2024-09-16T20:50:32Z | Published in SBGames 2024 | null | null | null | null | null | null | null | null | null |
2,409.10753 | Investigating Training Objectives for Generative Speech Enhancement | ['Julius Richter', 'Danilo de Oliveira', 'Timo Gerkmann'] | ['eess.AS', 'cs.SD'] | Generative speech enhancement has recently shown promising advancements in
improving speech quality in noisy environments. Multiple diffusion-based
frameworks exist, each employing distinct training objectives and learning
techniques. This paper aims to explain the differences between these frameworks
by focusing our i... | 2024-09-16T21:47:52Z | Accepted at ICASSP 2025 | null | null | Investigating Training Objectives for Generative Speech Enhancement | ['Julius Richter', 'Danilo de Oliveira', 'Timo Gerkmann'] | 2,024 | IEEE International Conference on Acoustics, Speech, and Signal Processing | 6 | 37 | ['Engineering', 'Computer Science'] |
2,409.10819 | EzAudio: Enhancing Text-to-Audio Generation with Efficient Diffusion
Transformer | ['Jiarui Hai', 'Yong Xu', 'Hao Zhang', 'Chenxing Li', 'Helin Wang', 'Mounya Elhilali', 'Dong Yu'] | ['eess.AS', 'cs.SD'] | We introduce EzAudio, a text-to-audio (T2A) generation framework designed to
produce high-quality, natural-sounding sound effects. Core designs include: (1)
We propose EzAudio-DiT, an optimized Diffusion Transformer (DiT) designed for
audio latent representations, improving convergence speed, as well as parameter
and m... | 2024-09-17T01:27:28Z | Accepted at Interspeech 2025 | null | null | null | null | null | null | null | null | null |
2,409.10994 | Less is More: A Simple yet Effective Token Reduction Method for
Efficient Multi-modal LLMs | ['Dingjie Song', 'Wenjun Wang', 'Shunian Chen', 'Xidong Wang', 'Michael Guan', 'Benyou Wang'] | ['cs.CL', 'cs.AI', 'cs.CV', 'cs.MM'] | The rapid advancement of Multimodal Large Language Models (MLLMs) has led to
remarkable performances across various domains. However, this progress is
accompanied by a substantial surge in the resource consumption of these models.
We address this pressing issue by introducing a new approach, Token Reduction
using CLIP ... | 2024-09-17T08:56:27Z | Accepted to COLING 2025 | null | null | null | null | null | null | null | null | null |
2,409.10999 | Enhancing Low-Resource Language and Instruction Following Capabilities
of Audio Language Models | ['Potsawee Manakul', 'Guangzhi Sun', 'Warit Sirichotedumrong', 'Kasima Tharnpipitchai', 'Kunat Pipatanakul'] | ['cs.CL', 'cs.AI', 'cs.SD', 'eess.AS'] | Audio language models process audio inputs using textual prompts for tasks
like speech recognition and audio captioning. Although built on multilingual
pre-trained components, most are trained primarily on English, limiting their
usability for other languages. This paper evaluates audio language models on
Thai, a low-r... | 2024-09-17T09:04:03Z | Interspeech 2025 | null | null | Enhancing Low-Resource Language and Instruction Following Capabilities of Audio Language Models | ['Potsawee Manakul', 'Guangzhi Sun', 'Warit Sirichotedumrong', 'Kasima Tharnpipitchai', 'Kunat Pipatanakul'] | 2,024 | arXiv.org | 7 | 39 | ['Computer Science', 'Engineering'] |
2,409.11051 | Down-Sampling Inter-Layer Adapter for Parameter and Computation
Efficient Ultra-Fine-Grained Image Recognition | ['Edwin Arkel Rios', 'Femiloye Oyerinde', 'Min-Chun Hu', 'Bo-Cheng Lai'] | ['cs.CV', 'I.2, I.4'] | Ultra-fine-grained image recognition (UFGIR) categorizes objects with
extremely small differences between classes, such as distinguishing between
cultivars within the same species, as opposed to species-level classification
in fine-grained image recognition (FGIR). The difficulty of this task is
exacerbated due to the ... | 2024-09-17T10:17:34Z | Accepted to ECCV 2024 Workshop on Efficient Deep Learning for
Foundation Models (EFM). Main: 13 pages, 3 figures, 2 tables. Appendix: 3
pages, 1 table. Total: 16 pages, 3 figures, 4 tables | null | null | null | null | null | null | null | null | null |
2,409.11059 | OneEncoder: A Lightweight Framework for Progressive Alignment of
Modalities | ['Bilal Faye', 'Hanane Azzag', 'Mustapha Lebbah'] | ['cs.CV', 'cs.LG'] | Cross-modal alignment learning integrates information from different
modalities like text, image, audio and video to create unified models. This
approach develops shared representations and learns correlations between
modalities, enabling applications such as visual question answering and
audiovisual content analysis. ... | 2024-09-17T10:38:46Z | null | null | null | OneEncoder: A Lightweight Framework for Progressive Alignment of Modalities | ['Bilal Faye', 'Hanane Azzag', 'M. Lebbah'] | 2,024 | arXiv.org | 0 | 80 | ['Computer Science'] |
2,409.11136 | Promptriever: Instruction-Trained Retrievers Can Be Prompted Like
Language Models | ['Orion Weller', 'Benjamin Van Durme', 'Dawn Lawrie', 'Ashwin Paranjape', 'Yuhao Zhang', 'Jack Hessel'] | ['cs.IR', 'cs.CL', 'cs.LG'] | Instruction-tuned language models (LMs) are able to respond to imperative
commands, providing a more natural user interface compared to their base
counterparts. In this work, we present Promptriever, the first retrieval model
able to be prompted like an LM. To train Promptriever, we curate and release a
new instance-lev... | 2024-09-17T12:42:55Z | null | null | null | Promptriever: Instruction-Trained Retrievers Can Be Prompted Like Language Models | ['Orion Weller', 'Benjamin Van Durme', 'Dawn J. Lawrie', 'Ashwin Paranjape', 'Yuhao Zhang', 'Jack Hessel'] | 2,024 | arXiv.org | 25 | 44 | ['Computer Science'] |
2,409.11272 | LOLA -- An Open-Source Massively Multilingual Large Language Model | ['Nikit Srivastava', 'Denis Kuchelev', 'Tatiana Moteu Ngoli', 'Kshitij Shetty', 'Michael Röder', 'Hamada Zahera', 'Diego Moussallem', 'Axel-Cyrille Ngonga Ngomo'] | ['cs.CL', 'cs.AI', 'cs.LG'] | This paper presents LOLA, a massively multilingual large language model
trained on more than 160 languages using a sparse Mixture-of-Experts
Transformer architecture. Our architectural and implementation choices address
the challenge of harnessing linguistic diversity while maintaining efficiency
and avoiding the commo... | 2024-09-17T15:23:08Z | null | Proceedings of the 31st International Conference on Computational
Linguistics (COLING 2025), "LOLA - An Open-Source Massively Multilingual
Large Language Model", ACL Anthology,
https://aclanthology.org/2025.coling-main.428/ | null | LOLA - An Open-Source Massively Multilingual Large Language Model | ['Nikit Srivastava', 'Denis Kuchelev', 'Tatiana Moteu', 'Kshitij Shetty', 'Michael Roeder', 'Diego Moussallem', 'Hamada M. Zahera', 'A. Ngomo'] | 2,024 | arXiv.org | 2 | 0 | ['Computer Science'] |
2,409.1134 | OmniGen: Unified Image Generation | ['Shitao Xiao', 'Yueze Wang', 'Junjie Zhou', 'Huaying Yuan', 'Xingrun Xing', 'Ruiran Yan', 'Chaofan Li', 'Shuting Wang', 'Tiejun Huang', 'Zheng Liu'] | ['cs.CV', 'cs.AI'] | The emergence of Large Language Models (LLMs) has unified language generation
tasks and revolutionized human-machine interaction. However, in the realm of
image generation, a unified model capable of handling various tasks within a
single framework remains largely unexplored. In this work, we introduce
OmniGen, a new d... | 2024-09-17T16:42:46Z | Update the paper for OmniGen-v1 | null | null | OmniGen: Unified Image Generation | ['Shitao Xiao', 'Yueze Wang', 'Junjie Zhou', 'Huaying Yuan', 'Xingrun Xing', 'Ruiran Yan', 'Shuting Wang', 'Tiejun Huang', 'Zheng Liu'] | 2,024 | arXiv.org | 88 | 80 | ['Computer Science'] |
2,409.11402 | NVLM: Open Frontier-Class Multimodal LLMs | ['Wenliang Dai', 'Nayeon Lee', 'Boxin Wang', 'Zhuolin Yang', 'Zihan Liu', 'Jon Barker', 'Tuomas Rintamaki', 'Mohammad Shoeybi', 'Bryan Catanzaro', 'Wei Ping'] | ['cs.CL', 'cs.AI', 'cs.CV', 'cs.LG', 'cs.MM'] | We introduce NVLM 1.0, a family of frontier-class multimodal large language
models (LLMs) that achieve state-of-the-art results on vision-language tasks,
rivaling the leading proprietary models (e.g., GPT-4o) and open-access models
(e.g., Llama 3-V 405B and InternVL 2). Remarkably, NVLM 1.0 shows improved
text-only per... | 2024-09-17T17:59:06Z | Fixed the typos. For more information, please visit our project page
at: https://research.nvidia.com/labs/adlr/NVLM-1 | null | null | null | null | null | null | null | null | null |
2,409.11404 | AraDiCE: Benchmarks for Dialectal and Cultural Capabilities in LLMs | ['Basel Mousi', 'Nadir Durrani', 'Fatema Ahmad', 'Md. Arid Hasan', 'Maram Hasanain', 'Tameem Kabbani', 'Fahim Dalvi', 'Shammur Absar Chowdhury', 'Firoj Alam'] | ['cs.CL', 'cs.AI', '68T50', 'F.2.2; I.2.7'] | Arabic, with its rich diversity of dialects, remains significantly
underrepresented in Large Language Models, particularly in dialectal
variations. We address this gap by introducing seven synthetic datasets in
dialects alongside Modern Standard Arabic (MSA), created using Machine
Translation (MT) combined with human p... | 2024-09-17T17:59:25Z | Benchmarking, Culturally Informed, Large Language Models, Arabic NLP,
LLMs, Arabic Dialect, Dialectal Benchmarking | null | null | AraDiCE: Benchmarks for Dialectal and Cultural Capabilities in LLMs | ['Basel Mousi', 'Nadir Durrani', 'Fatema Ahmad', 'Md Arid Hasan', 'Maram Hasanain', 'Tameem Kabbani', 'Fahim Dalvi', 'Shammur A. Chowdhury', 'Firoj Alam'] | 2,024 | arXiv.org | 9 | 82 | ['Computer Science'] |
2,409.11406 | Phidias: A Generative Model for Creating 3D Content from Text, Image,
and 3D Conditions with Reference-Augmented Diffusion | ['Zhenwei Wang', 'Tengfei Wang', 'Zexin He', 'Gerhard Hancke', 'Ziwei Liu', 'Rynson W. H. Lau'] | ['cs.CV'] | In 3D modeling, designers often use an existing 3D model as a reference to
create new ones. This practice has inspired the development of Phidias, a novel
generative model that uses diffusion for reference-augmented 3D generation.
Given an image, our method leverages a retrieved or user-provided 3D reference
model to g... | 2024-09-17T17:59:33Z | Project page: https://RAG-3D.github.io/ | null | null | null | null | null | null | null | null | null |
2,409.115 | Multi-Document Grounded Multi-Turn Synthetic Dialog Generation | ['Young-Suk Lee', 'Chulaka Gunasekara', 'Danish Contractor', 'Ramón Fernandez Astudillo', 'Radu Florian'] | ['cs.CL', 'cs.AI'] | We introduce a technique for multi-document grounded multi-turn synthetic
dialog generation that incorporates three main ideas. First, we control the
overall dialog flow using taxonomy-driven user queries that are generated with
Chain-of-Thought (CoT) prompting. Second, we support the generation of
multi-document groun... | 2024-09-17T19:02:39Z | null | null | null | null | null | null | null | null | null | null |
2,409.11635 | PainDiffusion: Learning to Express Pain | ['Quang Tien Dam', 'Tri Tung Nguyen Nguyen', 'Yuki Endo', 'Dinh Tuan Tran', 'Joo-Ho Lee'] | ['cs.CV'] | Accurate pain expression synthesis is essential for improving clinical
training and human-robot interaction. Current Robotic Patient Simulators (RPSs)
lack realistic pain facial expressions, limiting their effectiveness in medical
training. In this work, we introduce PainDiffusion, a generative model that
synthesizes n... | 2024-09-18T01:55:00Z | 8 pages, 9 figures | null | null | null | null | null | null | null | null | null |
2,409.11923 | Agglomerative Token Clustering | ['Joakim Bruslund Haurum', 'Sergio Escalera', 'Graham W. Taylor', 'Thomas B. Moeslund'] | ['cs.CV'] | We present Agglomerative Token Clustering (ATC), a novel token merging method
that consistently outperforms previous token merging and pruning methods across
image classification, image synthesis, and object detection & segmentation
tasks. ATC merges clusters through bottom-up hierarchical clustering, without
the intro... | 2024-09-18T12:37:58Z | ECCV 2024. Project webpage at https://vap.aau.dk/atc/ | null | null | Agglomerative Token Clustering | ['J. B. Haurum', 'Sergio Escalera', 'Graham W. Taylor', 'T. Moeslund'] | 2,024 | European Conference on Computer Vision | 4 | 74 | ['Computer Science'] |
2409.12106 | Measuring Human and AI Values Based on Generative Psychometrics with
Large Language Models | ['Haoran Ye', 'Yuhang Xie', 'Yuanyi Ren', 'Hanjun Fang', 'Xin Zhang', 'Guojie Song'] | ['cs.CL', 'cs.AI'] | Human values and their measurement are a long-standing interdisciplinary
inquiry. Recent advances in AI have sparked renewed interest in this area, with
large language models (LLMs) emerging as both tools and subjects of value
measurement. This work introduces Generative Psychometrics for Values (GPV), an
LLM-based, data... | 2024-09-18T16:26:22Z | Accepted at AAAI 2025 | null | null | Measuring Human and AI Values based on Generative Psychometrics with Large Language Models | ['Haoran Ye', 'Yuhang Xie', 'Yuanyi Ren', 'Hanjun Fang', 'Xin Zhang', 'Guojie Song'] | 2024 | AAAI Conference on Artificial Intelligence | 2 | 111 | ['Computer Science']
2409.12117 | Low Frame-rate Speech Codec: a Codec Designed for Fast High-quality
Speech LLM Training and Inference | ['Edresson Casanova', 'Ryan Langman', 'Paarth Neekhara', 'Shehzeen Hussain', 'Jason Li', 'Subhankar Ghosh', 'Ante Jukić', 'Sang-gil Lee'] | ['eess.AS', 'cs.CL', 'cs.SD'] | Large language models (LLMs) have significantly advanced audio processing
through audio codecs that convert audio into discrete tokens, enabling the
application of language modeling techniques to audio data. However, audio
codecs often operate at high frame rates, resulting in slow training and
inference, especially fo... | 2024-09-18T16:39:10Z | Submitted to ICASSP 2025 | null | null | null | null | null | null | null | null | null |
2409.12122 | Qwen2.5-Math Technical Report: Toward Mathematical Expert Model via
Self-Improvement | ['An Yang', 'Beichen Zhang', 'Binyuan Hui', 'Bofei Gao', 'Bowen Yu', 'Chengpeng Li', 'Dayiheng Liu', 'Jianhong Tu', 'Jingren Zhou', 'Junyang Lin', 'Keming Lu', 'Mingfeng Xue', 'Runji Lin', 'Tianyu Liu', 'Xingzhang Ren', 'Zhenru Zhang'] | ['cs.CL', 'cs.AI', 'cs.LG'] | In this report, we present a series of math-specific large language models:
Qwen2.5-Math and Qwen2.5-Math-Instruct-1.5B/7B/72B. The core innovation of the
Qwen2.5 series lies in integrating the philosophy of self-improvement
throughout the entire pipeline, from pre-training and post-training to
inference: (1) During th... | 2024-09-18T16:45:37Z | null | null | null | Qwen2.5-Math Technical Report: Toward Mathematical Expert Model via Self-Improvement | ['An Yang', 'Beichen Zhang', 'Binyuan Hui', 'Bofei Gao', 'Bowen Yu', 'Chengpeng Li', 'Dayiheng Liu', 'Jianhong Tu', 'Jingren Zhou', 'Junyang Lin', 'Keming Lu', 'Mingfeng Xue', 'Runji Lin', 'Tianyu Liu', 'Xingzhang Ren', 'Zhenru Zhang'] | 2024 | arXiv.org | 321 | 27 | ['Computer Science']
2409.12136 | GRIN: GRadient-INformed MoE | ['Liyuan Liu', 'Young Jin Kim', 'Shuohang Wang', 'Chen Liang', 'Yelong Shen', 'Hao Cheng', 'Xiaodong Liu', 'Masahiro Tanaka', 'Xiaoxia Wu', 'Wenxiang Hu', 'Vishrav Chaudhary', 'Zeqi Lin', 'Chenruidong Zhang', 'Jilong Xue', 'Hany Awadalla', 'Jianfeng Gao', 'Weizhu Chen'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Mixture-of-Experts (MoE) models scale more effectively than dense models due
to sparse computation through expert routing, selectively activating only a
small subset of expert modules. However, sparse computation challenges
traditional training practices, as discrete expert routing hinders standard
backpropagation and ... | 2024-09-18T17:00:20Z | 58 pages | null | null | GRIN: GRadient-INformed MoE | ['Liyuan Liu', 'Young Jin Kim', 'Shuohang Wang', 'Chen Liang', 'Yelong Shen', 'Hao Cheng', 'Xiaodong Liu', 'Masahiro Tanaka', 'Xiaoxia Wu', 'Wenxiang Hu', 'Vishrav Chaudhary', 'Zeqi Lin', 'Chenruidong Zhang', 'Jilong Xue', 'H. Awadalla', 'Jia-Xin Gao', 'Weizhu Chen'] | 2024 | arXiv.org | 7 | 0 | ['Computer Science']
2409.12181 | A Controlled Study on Long Context Extension and Generalization in LLMs | ['Yi Lu', 'Jing Nathan Yan', 'Songlin Yang', 'Justin T. Chiu', 'Siyu Ren', 'Fei Yuan', 'Wenting Zhao', 'Zhiyong Wu', 'Alexander M. Rush'] | ['cs.CL', 'cs.LG'] | Broad textual understanding and in-context learning require language models
that utilize full document contexts. Due to the implementation challenges
associated with directly training long-context models, many methods have been
proposed for extending models to handle long contexts. However, owing to
differences in data... | 2024-09-18T17:53:17Z | null | null | null | null | null | null | null | null | null | null |
2409.12182 | LifeGPT: Topology-Agnostic Generative Pretrained Transformer Model for
Cellular Automata | ['Jaime A. Berkovich', 'Markus J. Buehler'] | ['cs.AI', 'cond-mat.mtrl-sci', 'cond-mat.stat-mech', 'math.DS'] | Conway's Game of Life (Life), a well known algorithm within the broader class
of cellular automata (CA), exhibits complex emergent dynamics, with extreme
sensitivity to initial conditions. Modeling and predicting such intricate
behavior without explicit knowledge of the system's underlying topology
presents a significa... | 2024-09-03T11:43:16Z | null | null | LifeGPT: Topology-Agnostic Generative Pretrained Transformer Model for Cellular Automata | ['Jaime A. Berkovich', 'Markus J. Buehler'] | 2024 | arXiv.org | 2 | 70 | ['Computer Science', 'Physics', 'Mathematics']
2409.12186 | Qwen2.5-Coder Technical Report | ['Binyuan Hui', 'Jian Yang', 'Zeyu Cui', 'Jiaxi Yang', 'Dayiheng Liu', 'Lei Zhang', 'Tianyu Liu', 'Jiajun Zhang', 'Bowen Yu', 'Keming Lu', 'Kai Dang', 'Yang Fan', 'Yichang Zhang', 'An Yang', 'Rui Men', 'Fei Huang', 'Bo Zheng', 'Yibo Miao', 'Shanghaoran Quan', 'Yunlong Feng', 'Xingzhang Ren', 'Xuancheng Ren', 'Jingren Z... | ['cs.CL'] | In this report, we introduce the Qwen2.5-Coder series, a significant upgrade
from its predecessor, CodeQwen1.5. This series includes six models:
Qwen2.5-Coder-(0.5B/1.5B/3B/7B/14B/32B). As a code-specific model,
Qwen2.5-Coder is built upon the Qwen2.5 architecture and continues pretraining
on a vast corpus of over 5.5 t... | 2024-09-18T17:57:57Z | null | null | null | null | null | null | null | null | null | null |
2409.12191 | Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at
Any Resolution | ['Peng Wang', 'Shuai Bai', 'Sinan Tan', 'Shijie Wang', 'Zhihao Fan', 'Jinze Bai', 'Keqin Chen', 'Xuejing Liu', 'Jialin Wang', 'Wenbin Ge', 'Yang Fan', 'Kai Dang', 'Mengfei Du', 'Xuancheng Ren', 'Rui Men', 'Dayiheng Liu', 'Chang Zhou', 'Jingren Zhou', 'Junyang Lin'] | ['cs.CV', 'cs.AI', 'cs.CL'] | We present the Qwen2-VL Series, an advanced upgrade of the previous Qwen-VL
models that redefines the conventional predetermined-resolution approach in
visual processing. Qwen2-VL introduces the Naive Dynamic Resolution mechanism,
which enables the model to dynamically process images of varying resolutions
into differe... | 2024-09-18T17:59:32Z | Code is available at https://github.com/QwenLM/Qwen2-VL. arXiv admin
note: text overlap with arXiv:2408.15262 by other authors | null | null | null | null | null | null | null | null | null |
2409.12192 | DynaMo: In-Domain Dynamics Pretraining for Visuo-Motor Control | ['Zichen Jeff Cui', 'Hengkai Pan', 'Aadhithya Iyer', 'Siddhant Haldar', 'Lerrel Pinto'] | ['cs.RO', 'cs.AI', 'cs.CV', 'cs.LG'] | Imitation learning has proven to be a powerful tool for training complex
visuomotor policies. However, current methods often require hundreds to
thousands of expert demonstrations to handle high-dimensional visual
observations. A key reason for this poor data efficiency is that visual
representations are predominantly ... | 2024-09-18T17:59:43Z | null | null | null | null | null | null | null | null | null | null |
2409.12477 | ViolinDiff: Enhancing Expressive Violin Synthesis with Pitch Bend
Conditioning | ['Daewoong Kim', 'Hao-Wen Dong', 'Dasaem Jeong'] | ['cs.SD', 'cs.AI', 'cs.LG', 'eess.AS', 'eess.SP'] | Modeling the natural contour of fundamental frequency (F0) plays a critical
role in music audio synthesis. However, transcribing and managing multiple F0
contours in polyphonic music is challenging, and explicit F0 contour modeling
has not yet been explored for polyphonic instrumental synthesis. In this paper,
we prese... | 2024-09-19T05:39:19Z | Accepted for publication at ICASSP 2025 | null | null | null | null | null | null | null | null | null |
2409.12558 | RAD-Bench: Evaluating Large Language Models Capabilities in Retrieval
Augmented Dialogues | ['Tzu-Lin Kuo', 'Feng-Ting Liao', 'Mu-Wei Hsieh', 'Fu-Chieh Chang', 'Po-Chun Hsu', 'Da-Shan Shiu'] | ['cs.CL'] | In real-world applications with Large Language Models (LLMs), external
retrieval mechanisms - such as Search-Augmented Generation (SAG), tool
utilization, and Retrieval-Augmented Generation (RAG) - are often employed to
enhance the quality of augmented generations in dialogues. These approaches
often come with multi-tu... | 2024-09-19T08:26:45Z | null | null | null | null | null | null | null | null | null | null |
2409.12576 | StoryMaker: Towards Holistic Consistent Characters in Text-to-image
Generation | ['Zhengguang Zhou', 'Jing Li', 'Huaxia Li', 'Nemo Chen', 'Xu Tang'] | ['cs.CV'] | Tuning-free personalized image generation methods have achieved significant
success in maintaining facial consistency, i.e., identities, even with multiple
characters. However, the lack of holistic consistency in scenes with multiple
characters hampers these methods' ability to create a cohesive narrative. In
this pape... | 2024-09-19T08:53:06Z | 12 pages, 5 figures | null | null | null | null | null | null | null | null | null |
2409.12737 | MEXMA: Token-level objectives improve sentence representations | ['João Maria Janeiro', 'Benjamin Piwowarski', 'Patrick Gallinari', 'Loïc Barrault'] | ['cs.CL', 'cs.AI'] | Current pre-trained cross-lingual sentence encoder approaches use
sentence-level objectives only. This can lead to loss of information,
especially for tokens, which then degrades the sentence representation. We
propose MEXMA, a novel approach that integrates both sentence-level and
token-level objectives. The sentence... | 2024-09-19T13:00:29Z | 11 pages, 12 figures | null | null | MEXMA: Token-level objectives improve sentence representations | ['Joao Maria Janeiro', 'Benjamin Piwowarski', 'P. Gallinari', 'L. Barrault'] | 2024 | arXiv.org | 2 | 44 | ['Computer Science']
2409.12740 | HLLM: Enhancing Sequential Recommendations via Hierarchical Large
Language Models for Item and User Modeling | ['Junyi Chen', 'Lu Chi', 'Bingyue Peng', 'Zehuan Yuan'] | ['cs.IR', 'cs.AI'] | Large Language Models (LLMs) have achieved remarkable success in various
fields, prompting several studies to explore their potential in recommendation
systems. However, these attempts have so far resulted in only modest
improvements over traditional recommendation models. Moreover, three critical
questions remain unde... | 2024-09-19T13:03:07Z | null | null | HLLM: Enhancing Sequential Recommendations via Hierarchical Large Language Models for Item and User Modeling | ['Junyi Chen', 'Lu Chi', 'Bingyue Peng', 'Zehuan Yuan'] | 2024 | arXiv.org | 29 | 39 | ['Computer Science']
2409.12822 | Language Models Learn to Mislead Humans via RLHF | ['Jiaxin Wen', 'Ruiqi Zhong', 'Akbir Khan', 'Ethan Perez', 'Jacob Steinhardt', 'Minlie Huang', 'Samuel R. Bowman', 'He He', 'Shi Feng'] | ['cs.CL'] | Language models (LMs) can produce errors that are hard to detect for humans,
especially when the task is complex. RLHF, the most popular post-training
method, may exacerbate this problem: to achieve higher rewards, LMs might get
better at convincing humans that they are right even when they are wrong. We
study this phe... | 2024-09-19T14:50:34Z | null | null | null | Language Models Learn to Mislead Humans via RLHF | ['Jiaxin Wen', 'Ruiqi Zhong', 'Akbir Khan', 'Ethan Perez', 'Jacob Steinhardt', 'Minlie Huang', 'Samuel R. Bowman', 'He He', 'Shi Feng'] | 2024 | International Conference on Learning Representations | 44 | 35 | ['Computer Science']
2409.12883 | Improving Prototypical Parts Abstraction for Case-Based Reasoning
Explanations Designed for the Kidney Stone Type Recognition | ['Daniel Flores-Araiza', 'Francisco Lopez-Tiro', 'Clément Larose', 'Salvador Hinojosa', 'Andres Mendez-Vazquez', 'Miguel Gonzalez-Mendoza', 'Gilberto Ochoa-Ruiz', 'Christian Daul'] | ['cs.CV', 'cs.AI'] | The in-vivo identification of the kidney stone types during an ureteroscopy
would be a major medical advance in urology, as it could reduce the time of the
tedious renal calculi extraction process, while diminishing infection risks.
Furthermore, such an automated procedure would make it possible to prescribe
anti-recurren... | 2024-09-19T16:27:32Z | Paper submitted to Artificial Intelligence in Medicine (AIIM),
Elsevier | null | null | null | null | null | null | null | null | null |
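Note that the column summary above types `arxiv_id` as float64, so raw values come back as floats (e.g. 2409.115 rather than the canonical ID 2409.11500): the parser drops trailing zeros of the 5-digit sequence number, and display formatting may add thousands separators. A minimal sketch of normalizing such values back to canonical ID strings, assuming post-2015 arXiv IDs with 5-digit suffixes; `restore_arxiv_id` is an illustrative helper, not part of the dataset tooling:

```python
def restore_arxiv_id(raw) -> str:
    """Normalize a float-typed arXiv ID (e.g. 2409.115 or the display
    string '2,409.115') back to its canonical form '2409.11500' by
    zero-padding the 5-digit sequence number that float parsing drops."""
    # Strip display commas, then split YYMM from the sequence number.
    yymm, _, seq = str(raw).replace(",", "").partition(".")
    # Right-pad the sequence number back to 5 digits.
    return f"{yymm}.{seq.ljust(5, '0')}"

print(restore_arxiv_id(2409.115))       # 2409.11500
print(restore_arxiv_id("2,409.12122"))  # 2409.12122
```

A sturdier fix when loading the data is to cast the column to a string type before any float conversion happens, so no padding heuristic is needed.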