---
license: cc-by-4.0
task_categories:
- audio-classification
language:
- en
tags:
- speaker-diarization
- speaker-counting
- multi-speaker
- conversation
- social-scene-understanding
- ms-swift
- qwen
- audio
size_categories:
- 10K<n<100K
---

> **Note:** The underlying audio is sourced from YouTube videos whose copyright remains with the original owners. This reformatted dataset is intended for research purposes only. For the original annotations and audio, please refer to the [official VoxConverse repository](https://github.com/joonson/voxconverse).
>
> **Content notice:** The data consists of political debates and news segments. The views and opinions expressed by speakers do not reflect the positions of the original dataset authors, the University of Oxford, Naver Corporation, or the authors of this reformatted version.
>
> **Bias notice:** The distribution of identities in this dataset may not be representative of the global human population. Be mindful of unintended societal, gender, racial, linguistic, and other biases when training or deploying models trained on this data.

**Task:** Given a 30-second audio clip of a multi-speaker conversation, identify who speaks in each half-second bin (diarization), or count the number of distinct speakers.
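The expected assistant output for the diarization task can be checked mechanically. Below is a minimal sketch of such a validator; the function name `validate_timeline` and the exact strictness rules are our own illustration, not part of the dataset:

```python
import json
import re

# A valid diarization answer is a JSON object {"timeline": [...]} with exactly
# 60 entries, each entry being '-' (silence) or one or more uppercase letters
# ('A', 'AB', ...) for active speaker(s) in that 0.5-second bin.
TOKEN = re.compile(r"-|[A-Z]+")


def validate_timeline(answer: str, n_bins: int = 60) -> bool:
    """Return True if `answer` matches the diarization output format."""
    try:
        obj = json.loads(answer)
    except json.JSONDecodeError:
        return False
    timeline = obj.get("timeline") if isinstance(obj, dict) else None
    if not isinstance(timeline, list) or len(timeline) != n_bins:
        return False
    return all(isinstance(t, str) and TOKEN.fullmatch(t) for t in timeline)


# Example: speaker A, overlapping speech, speaker B, then silence.
example = json.dumps({"timeline": ["A"] * 20 + ["AB"] * 10 + ["B"] * 20 + ["-"] * 10})
print(validate_timeline(example))  # True
```

A check like this is useful both for filtering malformed model outputs at evaluation time and for sanity-checking the assistant messages before training.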
---

## Dataset Structure

### Columns

| Column | Type | Description |
|---|---|---|
| `messages` | `list[{role, content}]` | System / user / assistant conversation |
| `audios` | `list[binary]` | Raw 16 kHz mono WAV bytes (30 s per clip) |
| `videos` | `list[binary]` | Empty |
| `clip_id` | `string` | Source clip identifier for cross-window stitching |
| `win_start` | `float32` | Window start time in seconds within the source clip |

### Splits

| Split | Source | Diarization examples | Speaker count examples |
|---|---|---|---|
| train | VoxConverse dev set (216 clips) | 4,543 | 4,543 |
| test | VoxConverse test set (232 clips) | 10,088 | 10,088 |

---

## Subsets

### `diarization`

Given a 30-second clip, output a 60-entry timeline of 0.5-second bins indicating which speaker(s) are active.

**System:**

```
You are an expert in speaker diarization. Given a 30-second audio clip, identify who speaks in each 0.5-second bin. Assign each distinct speaker a letter (A, B, C, ...) in order of first appearance. Use '-' for silence and combined letters (e.g. 'AB') for simultaneous speech. Provide your answer as a valid JSON object with exactly 60 entries: {"timeline": ["A", "A", "AB", "B", "-", ...]}.
```

**User:**

```