gedeonmate committed · Commit a058c39 · verified · 1 Parent(s): 63aea75

Update README.md

  - split: test
    path: data/test-*
---
# 🗣️ LibriConvo-Segmented

**LibriConvo-Segmented** is a segmented version of the **LibriConvo** corpus — a **simulated multi-speaker conversational dataset** built using *Speaker-Aware Conversation Simulation (SASC)*.
It is designed for **training and evaluation of multi-speaker speech processing systems**, including **speaker diarization**, **automatic speech recognition (ASR)**, and **overlapping speech modeling**.

This segmented version provides ≤30-second conversational fragments derived from full LibriConvo dialogues, with room impulse responses applied to 40% of them. Every conversation has two speakers.

The full paper, detailing the creation of the corpus as well as baseline ASR and diarization results, is available at https://arxiv.org/abs/2510.23320.

---

## 🧠 Overview

**LibriConvo** ensures **natural conversational flow** and **contextual coherence** by:

- Organizing LibriTTS utterances by **book** to maintain narrative continuity.
- Using statistics from **CallHome** for pause modeling.
- Applying **compression** to remove excessively long silences while preserving turn dynamics.
- Enhancing **acoustic realism** via a novel **Room Impulse Response (RIR) selection procedure**, ranking configurations by spatial plausibility.
- Producing **speaker-disjoint splits** for robust evaluation and generalization.

In total, the full LibriConvo corpus comprises **240.1 hours** across **1,496 dialogues** with **830 unique speakers**.
This segmented release provides **shorter, self-contained audio clips** suitable for fine-tuning ASR and diarization models.
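
The RIR step in the list above can be illustrated in a few lines. This is only a minimal sketch with a synthetic signal and a toy exponentially decaying impulse response; the corpus's actual RIR files and ranking procedure are described in the paper:

```python
import numpy as np
from scipy.signal import fftconvolve

sr = 16_000                              # corpus sampling rate (16 kHz)
dry = np.random.randn(sr)                # 1 s of stand-in "dry" speech
decay = np.exp(-np.linspace(0.0, 8.0, sr // 4))
rir = decay * np.random.randn(sr // 4)   # toy exponentially decaying RIR

# Convolving dry speech with an RIR yields the reverberant signal.
wet = fftconvolve(dry, rir)
wet /= np.max(np.abs(wet))               # normalize to avoid clipping
```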

---

## 📦 Dataset Summary

| Split | # Examples |
|:-----------|------------:|
| Train | 138,683 |
| Validation | 15,893 |
| Test | 16,030 |

**Total size:** ≈ 145 GB  
**Sampling rate:** 16 kHz  
**Audio format:** WAV (mono)  
**Split criterion:** Speaker-disjoint

---

## 📂 Data Structure

Each row represents a single speech segment belonging to a simulated conversation between two speakers.

| Field | Type | Description |
|:------|:----:|:------------|
| `conversation_id` | string | Conversation identifier |
| `utterance_idx` | int64 | Utterance index within the conversation |
| `abstract_symbol` | string | High-level symbolic utterance ID (`A` or `B`) |
| `transcript` | string | Text transcription of the utterance |
| `duration_sec` | float64 | Segment duration (seconds) |
| `rir_file` | string | Room impulse response file used |
| `delay_sec` | float64 | Delay applied for realistic speaker overlap |
| `start_time_sec`, `end_time_sec` | float64 | Start and end times within the conversation |
| `abs_start_time_sec`, `abs_end_time_sec` | float64 | Global (absolute) start and end times |
| `segment_id` | int64 | Local segment index |
| `segment_conversation_id` | string | Unique segment identifier |
| `split` | string | One of `train`, `validation`, or `test` |
| `audio` | Audio (16 kHz) | Decoded audio data |
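
The timing fields above are enough to rebuild a conversation timeline and flag overlapping speech. A minimal sketch over hypothetical rows (invented here for illustration; real rows carry the full schema, including `audio`), assuming overlap means the next turn starts before the previous one ends:

```python
# Hypothetical rows following the field types above, not real data.
rows = [
    {"conversation_id": "conv_0001", "utterance_idx": 1, "abstract_symbol": "B",
     "start_time_sec": 2.8, "end_time_sec": 5.0},
    {"conversation_id": "conv_0001", "utterance_idx": 0, "abstract_symbol": "A",
     "start_time_sec": 0.0, "end_time_sec": 3.1},
]

# Restore utterance order within the conversation.
rows.sort(key=lambda r: r["utterance_idx"])

# Adjacent turns overlap when the next one starts before the previous ends.
overlaps = [
    (a["abstract_symbol"], b["abstract_symbol"])
    for a, b in zip(rows, rows[1:])
    if b["start_time_sec"] < a["end_time_sec"]
]
print(overlaps)  # [('A', 'B')]: B starts speaking before A finishes
```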

---

## 🚀 Loading the Dataset

```python
from datasets import load_dataset

ds = load_dataset("gedeonmate/LibriConvo-segmented")

print(ds)
# DatasetDict({
#     train: Dataset(...),
#     validation: Dataset(...),
#     test: Dataset(...)
# })
```
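
Given the ≈145 GB total size, the `datasets` library's standard streaming mode can iterate examples without a full download (same repository id as above; requires network access to the Hub):

```python
from datasets import load_dataset

# Stream examples on the fly instead of downloading ~145 GB up front.
ds = load_dataset("gedeonmate/LibriConvo-segmented", split="train", streaming=True)

for example in ds.take(2):
    print(example["transcript"], example["duration_sec"])
```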

---

## 📚 Citation

If you use the LibriConvo dataset or the associated Speaker-Aware Conversation Simulation (SASC) methodology in your research, please cite the following papers:

```bibtex
@misc{gedeon2025libriconvo,
  title         = {LibriConvo: Simulating Conversations from Read Literature for ASR and Diarization},
  author        = {Máté Gedeon and Péter Mihajlik},
  year          = {2025},
  eprint        = {2510.23320},
  archivePrefix = {arXiv},
  primaryClass  = {eess.AS},
  url           = {https://arxiv.org/abs/2510.23320}
}
```

```bibtex
@misc{gedeon2025sasc,
  title         = {From Independence to Interaction: Speaker-Aware Simulation of Multi-Speaker Conversational Timing},
  author        = {Máté Gedeon and Péter Mihajlik},
  year          = {2025},
  eprint        = {2509.15808},
  archivePrefix = {arXiv},
  primaryClass  = {cs.SD},
  url           = {https://arxiv.org/abs/2509.15808}
}
```