---
dataset_info:
  features:
    - name: audio
      dtype:
        audio:
          sampling_rate: 16000
    - name: conversation_id
      dtype: string
    - name: split
      dtype: string
    - name: utterance_idx
      sequence: int64
    - name: abstract_symbol
      sequence: string
    - name: start_time
      sequence: float64
    - name: end_time
      sequence: float64
    - name: abs_start_time
      sequence: float64
    - name: abs_end_time
      sequence: float64
    - name: text
      sequence: string
    - name: duration_sec
      sequence: float64
    - name: segment_id
      dtype: int64
    - name: segment_conversation_id
      dtype: string
    - name: rir
      dtype: bool
  splits:
    - name: train
      num_bytes: 25575970863.525
      num_examples: 30313
    - name: validation
      num_bytes: 3028603290.34
      num_examples: 3595
    - name: test
      num_bytes: 3133192896.73
      num_examples: 3674
  download_size: 29252180615
  dataset_size: 31737767050.595
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
license: cc
task_categories:
  - automatic-speech-recognition
language:
  - en
tags:
  - diarization
  - asr
---

# 🗣️ LibriConvo-Segmented

LibriConvo-Segmented is a segmented version of the LibriConvo corpus — a simulated two-speaker conversational dataset built using Speaker-Aware Conversation Simulation (SASC).
It is designed for training and evaluation of multi-speaker speech processing systems, including speaker diarization, automatic speech recognition (ASR), and overlapping speech modeling.

This segmented version provides ≤30-second conversational fragments derived from full LibriConvo dialogues, with room impulse responses (RIRs) applied to 40% of them.

The full paper, detailing the creation of the corpus as well as baseline ASR and diarization results, can be found here: https://arxiv.org/abs/2510.23320


## 🧠 Overview

LibriConvo ensures natural conversational flow and contextual coherence by:

- Organizing LibriTTS utterances by book to maintain narrative continuity.
- Using statistics from CallHome for pause modeling.
- Applying compression to remove excessively long silences while preserving turn dynamics.
- Enhancing acoustic realism via a novel Room Impulse Response (RIR) selection procedure, ranking configurations by spatial plausibility.
- Producing speaker-disjoint splits for robust evaluation and generalization.

In total, the full LibriConvo corpus comprises 240.1 hours across 1,496 dialogues with 830 unique speakers.
This segmented release provides shorter, self-contained audio clips suitable for fine-tuning ASR and diarization models.


## 📦 Dataset Summary

| Split      | # Segments |
|------------|------------|
| Train      | 30,313     |
| Validation | 3,595      |
| Test       | 3,674      |

- **Sampling rate:** 16 kHz
- **Audio format:** WAV (mono)
- **Split criterion:** speaker-disjoint
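As a quick sanity check, the split counts above imply roughly an 81/10/10 partition. A small stdlib-only sketch (the counts are copied from the table above):

```python
# Segment counts per split, taken from the summary table above.
splits = {"train": 30313, "validation": 3595, "test": 3674}
total = sum(splits.values())

for name, n in splits.items():
    print(f"{name}: {n} segments ({100 * n / total:.1f}%)")
```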


## 📂 Data Structure

Each row represents a single speech segment belonging to a simulated conversation between two speakers.

| Field | Type | Description |
|-------|------|-------------|
| `audio` | Audio (16 kHz) | Decoded audio data |
| `conversation_id` | string | Conversation identifier |
| `split` | string | One of `train`, `validation`, or `test` |
| `utterance_idx` | sequence of int64 | Utterance indices within the conversation |
| `abstract_symbol` | sequence of string | High-level symbolic speaker IDs (`'A'` or `'B'`) |
| `text` | sequence of string | Text transcriptions of the utterances |
| `start_time`, `end_time` | sequence of float64 | Start and end times within the conversation (seconds) |
| `abs_start_time`, `abs_end_time` | sequence of float64 | Global (absolute) start and end times (seconds) |
| `duration_sec` | sequence of float64 | Utterance durations (seconds) |
| `segment_id` | int64 | Local segment index |
| `segment_conversation_id` | string | Unique segment identifier |
| `rir` | bool | Whether a room impulse response was applied |
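Since the utterance-level fields are stored as parallel sequences, a row can be unpacked into one record per utterance. A minimal stdlib-only sketch; the `example` values and the `utterances` helper are illustrative, not part of the dataset or its API:

```python
# Illustrative row mirroring the schema above; the values are made up
# for demonstration and do not come from the dataset.
example = {
    "conversation_id": "conv_0001",
    "segment_id": 0,
    "abstract_symbol": ["A", "B", "A"],
    "text": ["hello there", "hi", "how are you"],
    "start_time": [0.0, 1.2, 2.5],
    "end_time": [1.1, 2.4, 4.0],
    "duration_sec": [1.1, 1.2, 1.5],
}

def utterances(row):
    """Zip the parallel sequence columns into per-utterance dicts."""
    keys = ["abstract_symbol", "text", "start_time", "end_time", "duration_sec"]
    return [dict(zip(keys, vals)) for vals in zip(*(row[k] for k in keys))]

for utt in utterances(example):
    print(f'{utt["abstract_symbol"]}: [{utt["start_time"]:.1f}-{utt["end_time"]:.1f}] {utt["text"]}')
```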

## 🚀 Loading the Dataset

```python
from datasets import load_dataset

ds = load_dataset("gedeonmate/LibriConvo-segmented")

print(ds)
# DatasetDict({
#     train: Dataset(...),
#     validation: Dataset(...),
#     test: Dataset(...)
# })
```
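Because start and end times are stored per utterance, overlapping speech inside a segment can be located directly from the timing sequences. A minimal stdlib-only sketch; the `overlaps` helper and the timing values are illustrative, not part of the dataset API:

```python
def overlaps(start_times, end_times):
    """Return (i, j, seconds) for consecutive utterances that overlap,
    i.e. where utterance j starts before utterance i has ended."""
    found = []
    for i in range(len(start_times) - 1):
        ov = end_times[i] - start_times[i + 1]
        if ov > 0:
            found.append((i, i + 1, round(ov, 3)))
    return found

# Made-up timings: utterance 1 starts 0.5 s before utterance 0 ends.
print(overlaps([0.0, 1.0, 3.0], [1.5, 2.0, 4.0]))  # [(0, 1, 0.5)]
```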

## 📚 Citation

If you use the LibriConvo dataset or the associated Speaker-Aware Conversation Simulation (SASC) methodology in your research, please cite the following papers:

```bibtex
@misc{gedeon2025libriconvo,
  title         = {LibriConvo: Simulating Conversations from Read Literature for ASR and Diarization},
  author        = {Máté Gedeon and Péter Mihajlik},
  year          = {2025},
  eprint        = {2510.23320},
  archivePrefix = {arXiv},
  primaryClass  = {eess.AS},
  url           = {https://arxiv.org/abs/2510.23320}
}

@misc{gedeon2025sasc,
  title         = {From Independence to Interaction: Speaker-Aware Simulation of Multi-Speaker Conversational Timing},
  author        = {Máté Gedeon and Péter Mihajlik},
  year          = {2025},
  eprint        = {2509.15808},
  archivePrefix = {arXiv},
  primaryClass  = {cs.SD},
  url           = {https://arxiv.org/abs/2509.15808}
}
```