arxiv:2603.24038

ACAVCaps: Enabling large-scale training for fine-grained and diverse audio understanding

Published on Mar 25

Abstract

AI-generated summary

ACAVCaps is a new large-scale, fine-grained audio captioning dataset derived from ACAV100M. A multi-expert pipeline analyzes each clip from speech, music, and acoustic perspectives, and a large language model synthesizes the expert analyses into detailed descriptions.

General audio understanding is a fundamental goal for large audio-language models, with audio captioning serving as a cornerstone task for their development. However, progress in this domain is hindered by existing datasets, which lack the scale and descriptive granularity required to train truly versatile models. To address this gap, we introduce ACAVCaps, a new large-scale, fine-grained, and multi-faceted audio captioning dataset. Derived from the ACAV100M collection, ACAVCaps is constructed using a multi-expert pipeline that analyzes audio from diverse perspectives, including speech, music, and acoustic properties; these expert analyses are then synthesized into rich, detailed descriptions by a large language model. Experimental results demonstrate that models pre-trained on ACAVCaps exhibit substantially stronger generalization capabilities on various downstream tasks compared to those trained on other leading captioning datasets. The dataset is available at https://github.com/xiaomi-research/acavcaps.
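
The abstract describes the multi-expert pipeline only at a high level. The sketch below shows one plausible way such a pipeline could be organized: the expert stubs, the LLM callable, and the prompt are all illustrative assumptions, not the authors' released implementation.

# Hypothetical sketch of a multi-expert captioning pipeline in the spirit
# of ACAVCaps: each expert describes one facet of a clip, and a language
# model fuses the reports into a single caption. Expert logic, the LLM
# callable, and the prompt are stand-ins, not the paper's actual code.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ExpertReport:
    perspective: str  # "speech", "music", or "acoustics"
    findings: str     # free-text observations from that expert

# Stub experts; real ones would wrap ASR, music tagging, and
# sound-event detection models respectively.
def speech_expert(audio_path: str) -> ExpertReport:
    return ExpertReport("speech", "one female speaker, calm English narration")

def music_expert(audio_path: str) -> ExpertReport:
    return ExpertReport("music", "soft piano, slow tempo, ambient mood")

def acoustic_expert(audio_path: str) -> ExpertReport:
    return ExpertReport("acoustics", "indoor recording with light room reverb")

def caption_clip(audio_path: str, llm: Callable[[str], str]) -> str:
    # Run every expert on the clip, then ask the LLM to fuse the findings.
    reports = [f(audio_path) for f in (speech_expert, music_expert, acoustic_expert)]
    evidence = "\n".join(f"[{r.perspective}] {r.findings}" for r in reports)
    prompt = ("Combine the expert observations below into one detailed, "
              "factual caption for the audio clip:\n" + evidence)
    return llm(prompt)

# Trivial stand-in LLM that just stitches the evidence lines together:
print(caption_clip("clip.wav", llm=lambda p: "; ".join(p.splitlines()[1:])))

Run at ACAV100M scale with real expert models, a loop like this is what would produce the (audio, caption) pairs used for pre-training.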

Get this paper in your agent:

hf papers read 2603.24038

Don't have the latest CLI? Install it with:

curl -LsSf https://hf.co/cli/install.sh | bash

Models citing this paper: 1

Datasets citing this paper: 0

Spaces citing this paper: 0

Collections including this paper: 0
