---
license: cc-by-4.0
task_categories:
- audio-classification
language:
- en
tags:
- speaker-diarization
- speaker-counting
- multi-speaker
- conversation
- social-scene-understanding
- ms-swift
- qwen
- audio
size_categories:
- 10K<n<100K
---

# VoxConverse — Speaker Diarization in the Wild (MS-Swift Format)

This dataset is a reformatted version of [VoxConverse](https://github.com/joonson/voxconverse) for fine-tuning and evaluating multimodal large language models on speaker diarization, packaged in the [MS-Swift](https://github.com/modelscope/ms-swift) Parquet format.

> **Note:** The underlying audio is sourced from YouTube videos whose copyright remains with the original owners. This reformatted dataset is intended for research purposes only. For the original annotations and audio, please refer to the [official VoxConverse repository](https://github.com/joonson/voxconverse).
>
> **Content notice:** The data consists of political debates and news segments. The views and opinions expressed by speakers do not reflect positions of the original dataset authors, the University of Oxford, Naver Corporation, or the authors of this reformatted version.
>
> **Bias notice:** The distribution of identities in this dataset may not be representative of the global human population. Please be careful of unintended societal, gender, racial, linguistic and other biases when training or deploying models trained on this data.

**Task:** Given a 30-second audio clip of a multi-speaker conversation, identify who speaks in each half-second bin (diarization), or count the number of distinct speakers.

---

## Dataset Structure

### Columns

| Column | Type | Description |
|---|---|---|
| `messages` | `list[{role, content}]` | System / user / assistant conversation |
| `audios` | `list[binary]` | Raw 16kHz mono WAV bytes (30s per clip) |
| `videos` | `list[binary]` | Empty |
| `clip_id` | `string` | Source clip identifier for cross-window stitching |
| `win_start` | `float32` | Window start time in seconds within the source clip |

### Splits

| Split | Source | Diarization examples | Speaker count examples |
|---|---|---|---|
| train | VoxConverse dev set (216 clips) | 4,543 | 4,543 |
| test | VoxConverse test set (232 clips) | 10,088 | 10,088 |
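
The code snippets in this card are illustrative sketches rather than official tooling. For example, assuming a Parquet shard of the `diarization` subset has been downloaded locally (the file name below is a placeholder, and column handling assumes the schema above), a row can be inspected and its audio decoded like this:

```python
# Sketch: inspect one row of a locally downloaded Parquet shard.
# The file name is a placeholder; column handling assumes the schema above.
import io

import pandas as pd
import soundfile as sf

df = pd.read_parquet("diarization-train.parquet")
row = df.iloc[0]

print(row["clip_id"], row["win_start"])       # source clip id and window offset (s)
print([m["role"] for m in row["messages"]])   # ['system', 'user', 'assistant']

# `audios` holds raw WAV bytes; decode the single 30 s clip of this window.
audio, sr = sf.read(io.BytesIO(row["audios"][0]))
print(sr, audio.shape)                        # 16000, (480000,)
```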

---

## Subsets

### `diarization`

Given a 30-second clip, output a 60-entry timeline of 0.5-second bins indicating which speaker(s) are active.

**System:**
```
You are an expert in speaker diarization.
Given a 30-second audio clip, identify who speaks in each 0.5-second bin.
Assign each distinct speaker a letter (A, B, C, ...) in order of first appearance.
Use '-' for silence and combined letters (e.g. 'AB') for simultaneous speech.
Provide your answer as a valid JSON object with exactly 60 entries:
{"timeline": ["A", "A", "AB", "B", "-", ...]}.
```

**User:**
```
<audio>
For each of the 60 half-second bins in this clip, indicate which speaker(s)
are active. Use letters (A, B, ...) in order of first appearance, '-' for silence,
combined letters (e.g. 'AB') for overlap.
```

**Assistant:**
```json
{"timeline": ["A", "A", "A", "-", "B", "B", "AB", "B", "A", "A", ...]}
```
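
A model reply can be checked against this format before scoring. The helper below is a minimal parsing and validation sketch (not part of the dataset tooling); it also shows that the number of distinct speakers, the target of the `speaker_count` subset below, can be derived from the same timeline:

```python
# Sketch: parse and sanity-check a predicted diarization timeline.
import json
import re

def parse_timeline(text: str) -> list[str]:
    """Extract and validate the 60-entry timeline from a model response."""
    timeline = json.loads(text)["timeline"]
    assert len(timeline) == 60, f"expected 60 bins, got {len(timeline)}"
    for entry in timeline:
        # each bin is '-' (silence) or one or more speaker letters, e.g. 'A' or 'AB'
        assert entry == "-" or re.fullmatch(r"[A-Z]+", entry), f"bad bin: {entry!r}"
    return timeline

reply = json.dumps({"timeline": ["A"] * 30 + ["-"] * 10 + ["AB"] * 20})
timeline = parse_timeline(reply)
num_speakers = len({letter for b in timeline if b != "-" for letter in b})
print(num_speakers)  # 2 -> this is what the speaker_count subset asks for
```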
83
+
84
+ ### `speaker_count`
85
+
86
+ Given a 30-second clip, count the number of distinct speakers.
87
+
88
+ **System:**
89
+ ```
90
+ You are an expert in speaker diarization.
91
+ Given a 30-second audio clip, count the number of distinct speakers present.
92
+ Provide your answer as a valid JSON object: {"num_speakers": N}.
93
+ ```
94
+
95
+ **User:**
96
+ ```
97
+ <audio>
98
+ How many distinct speakers are present in this 30-second clip?
99
+ ```
100
+
101
+ **Assistant:** `{"num_speakers": 3}`
102
+
103
+ ---
104
+
105
+ ## Windowing & Annotation Details
106
+
107
+ - **Window:** 30 seconds, **stride:** 15 seconds (50% overlap)
108
+ - **Bin size:** 0.5 seconds → 60 bins per window
109
+ - **Active threshold:** speaker is active in a bin if their segment overlaps it by > 0.25s (DER forgiveness collar from the original paper)
110
+ - **Speaker normalization:** raw IDs (spk00, spk01, ...) mapped to letters (A, B, ...) in order of first appearance within each window
111
+ - **Audio:** resampled to 16kHz mono WAV
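
A minimal sketch of how these rules turn reference segments into a window's timeline (illustrative, not the exact script used to build the dataset; `segments` is assumed to already be shifted to window-relative times):

```python
# Sketch: bin window-relative (speaker, start, end) segments into the 60-entry timeline.
def segments_to_timeline(segments, n_bins=60, bin_size=0.5, min_overlap=0.25):
    letters = {}   # raw speaker id -> letter, in order of first appearance
    timeline = []
    for b in range(n_bins):
        bin_start, bin_end = b * bin_size, (b + 1) * bin_size
        active = []
        for spk, start, end in segments:
            overlap = min(end, bin_end) - max(start, bin_start)
            if overlap > min_overlap:                      # > 0.25 s of speech in this bin
                letters.setdefault(spk, chr(ord("A") + len(letters)))
                if letters[spk] not in active:
                    active.append(letters[spk])
        timeline.append("".join(sorted(active)) if active else "-")
    return timeline

print(segments_to_timeline([("spk00", 0.0, 1.3), ("spk01", 1.0, 2.0)])[:5])
# ['A', 'A', 'AB', 'B', '-']
```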

### Cross-window Speaker Stitching (Inference)

Since speaker labels are normalized independently per window, inference over a full clip requires stitching. The 50% overlap (15s = 30 shared bins) between consecutive windows allows Hungarian matching (see the sketch after this list):

1. Group rows by `clip_id`, sort by `win_start`
2. For each adjacent window pair, build a co-occurrence matrix over the 30 shared bins
3. Apply the Hungarian algorithm to find the optimal speaker mapping
4. Re-label speakers in window N+1 to be consistent with window N
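
A minimal sketch of steps 2–3, assuming `scipy` is available and that each window's predicted timeline has already been parsed into a 60-entry list; speakers left unmatched (when the two windows disagree on the count) would simply keep a fresh label:

```python
# Sketch: match speaker letters of window N+1 to those of window N using the
# 30 half-second bins the two windows share (last 15 s of N = first 15 s of N+1).
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_speakers(prev_tail, next_head):
    """prev_tail, next_head: the 30 overlapping bins of two consecutive windows."""
    prev_spk = sorted({c for b in prev_tail if b != "-" for c in b})
    next_spk = sorted({c for b in next_head if b != "-" for c in b})
    # co-occurrence counts: how often speaker i (previous) and j (next) share a bin
    cooc = np.zeros((len(prev_spk), len(next_spk)))
    for p_bin, n_bin in zip(prev_tail, next_head):
        for i, p in enumerate(prev_spk):
            for j, n in enumerate(next_spk):
                if p in p_bin and n in n_bin:
                    cooc[i, j] += 1
    # Hungarian algorithm maximises total co-occurrence (minimise its negation)
    rows, cols = linear_sum_assignment(-cooc)
    return {next_spk[j]: prev_spk[i] for i, j in zip(rows, cols)}

print(match_speakers(["A", "AB", "B"] * 10, ["B", "BA", "A"] * 10))
# {'B': 'A', 'A': 'B'} -> next window's B is previous window's A, and vice versa
```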

---

## Speaker Distribution (train split)

| Speakers per window | Windows |
|---|---|
| 1 | 1,364 |
| 2 | 1,985 |
| 3 | 818 |
| 4 | 289 |
| 5+ | 87 |

---

## Evaluation

The diarization task is directly compatible with the standard **Diarisation Error Rate (DER)**:

```
DER = Missed Speech + False Alarm + Speaker Confusion
```

Convert the predicted timeline back to RTTM-style segments (each bin = 0.5s) and evaluate with `pyannote.metrics` using a 0.25s forgiveness collar, as in the sketch below. The paper's audio-only baseline achieves **~20% DER** on the dev set; the best audio-visual method achieves **7.7% DER**.
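
A minimal scoring sketch, assuming both reference and hypothesis are available as 60-entry timelines. Note that the `collar` argument of `pyannote.metrics` is the total collar width centered on each reference boundary, so double-check how it maps onto the paper's 0.25s collar before comparing numbers:

```python
# Sketch: turn 60-entry timelines into pyannote annotations and compute DER.
from pyannote.core import Annotation, Segment
from pyannote.metrics.diarization import DiarizationErrorRate

def timeline_to_annotation(timeline, bin_size=0.5):
    ann = Annotation()
    for b, entry in enumerate(timeline):
        if entry == "-":
            continue
        for spk in entry:  # each letter in the bin is one active speaker
            ann[Segment(b * bin_size, (b + 1) * bin_size), f"{b}-{spk}"] = spk
    return ann

ref = timeline_to_annotation(["A"] * 30 + ["B"] * 30)
hyp = timeline_to_annotation(["A"] * 28 + ["-"] * 2 + ["B"] * 30)
print(DiarizationErrorRate(collar=0.25)(ref, hyp))  # DER as a fraction (×100 for %)
```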

---

## Usage with MS-Swift

```bash
# Fine-tuning
swift sft \
  --model Qwen/Qwen3-Omni-7B \
  --dataset voxconverse-omni/diarization voxconverse-omni/speaker-count \
  --custom_plugin plugins/omni_dataset_plugin.py

# Evaluation
swift eval \
  --model Qwen/Qwen3-Omni-7B \
  --dataset voxconverse-omni/diarization \
  --custom_plugin plugins/omni_dataset_plugin.py \
  --split test
```

---

## Source Dataset

- **Original dataset:** [VoxConverse](https://github.com/joonson/voxconverse) — Chung et al., 2020
- 216 dev + 232 test clips from YouTube (political debates, panel discussions, news)
- Duration: 22s–1097s per clip, avg ~338s (dev) / ~675s (test)
- Annotations: RTTM format, manually verified, 0.1s boundary precision
- License: [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)

## Citation

```bibtex
@inproceedings{chung2020spot,
  title={Spot the conversation: speaker diarisation in the wild},
  author={Chung, Joon Son and Huh, Jaesung and Nagrani, Arsha and Afouras, Triantafyllos and Zisserman, Andrew},
  booktitle={Interspeech},
  year={2020}
}
```