See, Hear, and Understand: Benchmarking Audiovisual Human Speech Understanding in Multimodal Large Language Models
Paper • 2512.02231
Columns: video1 (video), video2 (video), label (int32). (Dataset-viewer preview omitted: the video cells do not render as text, leaving only the per-pair 0/1 labels.)
Video-pair classification dataset from AV-SpeakerBench (Nguyen et al., 2025).
Each row contains two video clips (video1, video2) and a binary label indicating whether they come from the same source video (same speaker context) or different source videos (different speakers).
Source: mteb/AV-SpeakerBench, deduplicated by video_id; 2048 balanced pairs were generated from YouTube source video IDs.
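The balanced-pair construction described above can be sketched in pure Python. This is a minimal illustration, not the benchmark's published pipeline: the function name `make_balanced_pairs`, the alternating sampling policy, and the toy clip IDs are all assumptions. Positive pairs (label 1) draw two clips from the same source video ID; negative pairs (label 0) draw one clip from each of two different source videos, targeting an even class split.

```python
import random

def make_balanced_pairs(clips_by_video, n_pairs, seed=0):
    """Build n_pairs (clip_a, clip_b, label) triples, half positive.

    clips_by_video: dict mapping a source video ID to a list of clip IDs.
    label 1 = both clips come from the same source video, 0 = different.
    """
    rng = random.Random(seed)
    # Positives need at least two clips from a single source video.
    multi_clip_ids = [v for v, clips in clips_by_video.items() if len(clips) >= 2]
    pairs = []
    for i in range(n_pairs):
        if i % 2 == 0:
            # Positive pair: two distinct clips from one source video.
            vid = rng.choice(multi_clip_ids)
            a, b = rng.sample(clips_by_video[vid], 2)
            pairs.append((a, b, 1))
        else:
            # Negative pair: one clip from each of two different videos.
            v1, v2 = rng.sample(list(clips_by_video), 2)
            pairs.append((rng.choice(clips_by_video[v1]),
                          rng.choice(clips_by_video[v2]), 0))
    rng.shuffle(pairs)
    return pairs

# Toy example: three hypothetical source videos with two clips each.
clips = {"vidA": ["a0", "a1"], "vidB": ["b0", "b1"], "vidC": ["c0", "c1"]}
pairs = make_balanced_pairs(clips, 8)
```

A real pipeline would additionally deduplicate by video_id before sampling, as the source description notes, so that no source video is over-represented.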