---
dataset_info:
  features:
    - name: id
      dtype: string
    - name: duration
      dtype: float64
    - name: audio
      dtype: audio
    - name: text
      dtype: string
language:
  - de
task_categories:
  - automatic-speech-recognition
source_datasets:
  - i4ds/spc_r
license: cc-by-4.0
---

# eko57/spc_r_segmented

A diarized and segmented German speech dataset derived from [i4ds/spc_r](https://huggingface.co/datasets/i4ds/spc_r).

## Description

Each row is a merged speech segment belonging to a single speaker. The source audio and SRT subtitles from i4ds/spc_r were processed with the following pipeline:

  1. **Diarization**: `pyannote/speaker-diarization-3.1` assigned a speaker label to each SRT segment based on temporal overlap.
  2. **Merging**: consecutive SRT segments from the same speaker were merged when the silence gap between them was below a threshold (default 1.0 s) and the resulting duration stayed within bounds (default 10–20 s).
  3. **Slicing**: the merged time ranges were used to slice the original audio waveform. Each segment is encoded as FLAC.
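The merging rule in step 2 can be sketched as follows. This is an illustrative reimplementation, not the actual processing code: the segment representation, function name, and the final minimum-duration filter are assumptions.

```python
def merge_segments(segments, max_gap=1.0, min_dur=10.0, max_dur=20.0):
    """Merge consecutive same-speaker segments separated by short silences.

    Each segment is assumed to be a dict with 'start', 'end', 'speaker',
    and 'text' keys, sorted by start time.
    """
    merged = []
    for seg in segments:
        if merged:
            prev = merged[-1]
            gap = seg["start"] - prev["end"]
            combined = seg["end"] - prev["start"]
            # Merge only if the speaker matches, the silence gap is short,
            # and the merged segment stays within the duration bound.
            if (seg["speaker"] == prev["speaker"]
                    and gap <= max_gap
                    and combined <= max_dur):
                prev["end"] = seg["end"]
                prev["text"] += " " + seg["text"]
                continue
        merged.append(dict(seg))
    # Assumed final filter: drop segments shorter than the lower bound.
    return [m for m in merged if m["end"] - m["start"] >= min_dur]
```

With these defaults, two same-speaker segments 0.5 s apart are merged into one row, while a short segment from a different speaker is dropped by the minimum-duration filter.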

## Columns

| Column | Type | Description |
| --- | --- | --- |
| `id` | string | Unique identifier (`row{NNNNN}_seg{NNN}`) |
| `duration` | float64 | Segment duration in seconds |
| `audio` | audio | FLAC audio for the segment |
| `text` | string | Merged transcript text from the SRT segments |
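The source row and segment indices can be recovered from the `id` pattern above. A minimal sketch (the helper name is hypothetical):

```python
import re

# Matches the id format documented above: row{NNNNN}_seg{NNN}.
ID_RE = re.compile(r"^row(?P<row>\d{5})_seg(?P<seg>\d{3})$")

def parse_id(seg_id):
    """Return (source_row, segment_index) parsed from a segment id."""
    m = ID_RE.match(seg_id)
    if m is None:
        raise ValueError(f"unexpected id format: {seg_id!r}")
    return int(m.group("row")), int(m.group("seg"))

print(parse_id("row00042_seg007"))  # → (42, 7)
```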

## Usage

```python
from datasets import load_dataset

ds = load_dataset("eko57/spc_r_segmented")
print(ds["train"][0])
```