arXiv:2602.07960

D-ORCA: Dialogue-Centric Optimization for Robust Audio-Visual Captioning

Published on Feb 8

AI-generated summary

D-ORCA is a dialogue-centric omni-modal large language model for robust audio-visual captioning that uses group relative policy optimization with novel reward functions for speaker attribution, speech content accuracy, and temporal boundary alignment.

Abstract

Spoken dialogue is a primary source of information in videos; therefore, accurately identifying who spoke what and when is essential for deep video understanding. We introduce D-ORCA, a dialogue-centric omni-modal large language model optimized for robust audio-visual captioning. We further curate DVD, a large-scale, high-quality bilingual dataset comprising nearly 40,000 multi-party dialogue videos for training and 2,000 videos for evaluation in English and Mandarin, addressing a critical gap in the open-source ecosystem. To ensure fine-grained captioning accuracy, we adopt group relative policy optimization with three novel reward functions that assess speaker attribution accuracy, global speech content accuracy, and sentence-level temporal boundary alignment. These rewards are derived from evaluation metrics widely used in speech processing and, to our knowledge, are applied for the first time as reinforcement learning objectives for audio-visual captioning. Extensive experiments demonstrate that D-ORCA substantially outperforms existing open-source models in speaker identification, speech recognition, and temporal grounding. Notably, despite having only 8 billion parameters, D-ORCA achieves performance competitive with Qwen3-Omni across several general-purpose audio-visual understanding benchmarks. Demos are available at https://d-orca-llm.github.io/. Our code, data, and checkpoints will be available at https://github.com/WeChatCV/D-ORCA/.
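The abstract specifies the three reward signals only at a high level (speaker attribution accuracy, global speech content accuracy, and sentence-level temporal boundary alignment, all derived from standard speech-processing metrics). The sketch below is an assumption about what such rewards could look like in practice, not the authors' implementation: it guesses a WER-based content reward (via the jiwer library), a turn-level speaker-match reward, an IoU-based temporal reward, and the group-relative advantage normalization used by GRPO. All function names are hypothetical.

import jiwer  # assumed WER library; not confirmed by the paper


def content_reward(hyp_text: str, ref_text: str) -> float:
    # Global speech-content reward: 1 - WER, clipped to [0, 1].
    return max(0.0, 1.0 - jiwer.wer(ref_text, hyp_text))


def speaker_attribution_reward(hyp_turns, ref_turns) -> float:
    # Fraction of dialogue turns whose predicted speaker label matches the reference.
    # Turns are (speaker_id, text) pairs assumed to be aligned by index; a real system
    # would need an explicit alignment step, omitted here.
    if not ref_turns:
        return 0.0
    correct = sum(1 for (h_spk, _), (r_spk, _) in zip(hyp_turns, ref_turns) if h_spk == r_spk)
    return correct / len(ref_turns)


def temporal_reward(hyp_spans, ref_spans) -> float:
    # Sentence-level temporal alignment: mean IoU between predicted and reference
    # (start, end) spans, paired by index.
    ious = []
    for (h_start, h_end), (r_start, r_end) in zip(hyp_spans, ref_spans):
        inter = max(0.0, min(h_end, r_end) - max(h_start, r_start))
        union = max(h_end, r_end) - min(h_start, r_start)
        ious.append(inter / union if union > 0 else 0.0)
    return sum(ious) / len(ious) if ious else 0.0


def grpo_advantages(rewards):
    # Group relative policy optimization: each sampled caption's total reward is
    # normalized against the mean and standard deviation of its sampling group.
    mean = sum(rewards) / len(rewards)
    std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5
    return [(r - mean) / (std + 1e-8) for r in rewards]

Combining the three rewards (for example as a weighted sum per sampled caption) and feeding the group-normalized advantages into the policy-gradient update would reproduce the overall training loop the abstract describes, though the actual weighting and metric choices are not stated there.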
