---
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
- text-to-speech
language:
- tr
tags:
- speech
- audio
- dataset
- tts
- asr
- merged-dataset
size_categories:
- 1K<n<10K
configs:
- config_name: default
  data_files:
  - split: train
    path: "train-*.parquet"
  default: true
dataset_info:
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: null
  - name: text
    dtype: string
  - name: speaker_id
    dtype: string
  - name: language
    dtype: string
  - name: emotion
    dtype: string
  - name: original_dataset
    dtype: string
  - name: original_filename
    dtype: string
  - name: start_time
    dtype: float32
  - name: end_time
    dtype: float32
  - name: duration
    dtype: float32
  splits:
  - name: train
    num_examples: 1293
  config_name: default
---
# FTTRTEST
This is a merged Turkish speech dataset containing 1293 audio segments drawn from 5 source datasets.
## Dataset Information
- **Total Segments**: 1293
- **Speakers**: 7
- **Languages**: tr
- **Emotions**: happy, angry, neutral
- **Original Datasets**: 5
## Dataset Structure
Each example contains:
- `audio`: Audio file (WAV format, original sampling rate preserved)
- `text`: Transcription of the audio
- `speaker_id`: Unique speaker identifier (made unique across all merged datasets)
- `language`: Language code (ISO 639-1; every segment in this dataset is `tr`)
- `emotion`: Detected emotion (`happy`, `angry`, or `neutral` in this dataset)
- `original_dataset`: Name of the source dataset this segment came from
- `original_filename`: Original filename in the source dataset
- `start_time`: Start time of the segment in seconds
- `end_time`: End time of the segment in seconds
- `duration`: Duration of the segment in seconds
## Usage
### Loading the Dataset
```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("Codyfederer/fttrtest")

# Access the training split
train_data = dataset["train"]

# Example: inspect the first sample
sample = train_data[0]
print(f"Text: {sample['text']}")
print(f"Speaker: {sample['speaker_id']}")
print(f"Language: {sample['language']}")
print(f"Emotion: {sample['emotion']}")
print(f"Original Dataset: {sample['original_dataset']}")
print(f"Duration: {sample['duration']}s")

# Play audio (requires audio libraries)
# sample['audio']['array'] contains the audio data
# sample['audio']['sampling_rate'] contains the sampling rate
```
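### Filtering and Exporting Audio

A minimal sketch of downstream use, assuming `soundfile` is installed (any WAV writer works). The emotion and duration thresholds below are arbitrary examples, not values from the dataset:

```python
import soundfile as sf

# Filter on metadata columns only; input_columns avoids
# decoding every audio file during the scan
subset = train_data.filter(
    lambda emotion, duration: emotion == "neutral" and duration < 10.0,
    input_columns=["emotion", "duration"],
)

# Write the first matching segment back out as a WAV file
first = subset[0]
sf.write(
    "sample.wav",
    first["audio"]["array"],
    first["audio"]["sampling_rate"],
)
```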
### Alternative: Load from JSONL
```python
import json

from datasets import Audio, Dataset, Features, Value

# Read the JSONL metadata line by line
rows = []
with open("data.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        rows.append(json.loads(line))

# Declare the schema; sampling_rate=None preserves each file's original rate
features = Features({
    "audio": Audio(sampling_rate=None),
    "text": Value("string"),
    "speaker_id": Value("string"),
    "language": Value("string"),
    "emotion": Value("string"),
    "original_dataset": Value("string"),
    "original_filename": Value("string"),
    "start_time": Value("float32"),
    "end_time": Value("float32"),
    "duration": Value("float32"),
})

dataset = Dataset.from_list(rows, features=features)
```
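The `audio` values in `data.jsonl` are relative paths, so the snippet above assumes it runs from the dataset root. If you work from another directory, prefix the paths before building the dataset; the root path below is a placeholder:

```python
import os

DATASET_ROOT = "/path/to/fttrtest"  # placeholder: your local checkout

# Resolve relative audio paths against the dataset root
rows = [dict(r, audio=os.path.join(DATASET_ROOT, r["audio"])) for r in rows]
```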
### Repository Files
The dataset repository includes:
- `data.jsonl` - Main dataset file with all columns (JSON Lines)
- `*.wav` - Audio files under `audio_XXX/` subdirectories
- `load_dataset.txt` - Python script for loading the dataset (rename to .py to use)
JSONL keys:
- `audio`: Relative audio path (e.g., `audio_000/segment_000000_speaker_0.wav`)
- `text`: Transcription of the audio
- `speaker_id`: Unique speaker identifier
- `language`: Language code
- `emotion`: Detected emotion
- `original_dataset`: Name of the source dataset
- `original_filename`: Original filename in the source dataset
- `start_time`: Start time of the segment in seconds
- `end_time`: End time of the segment in seconds
- `duration`: Duration of the segment in seconds
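Because `data.jsonl` is plain JSON Lines, you can also inspect the metadata without decoding any audio; a quick sketch with pandas (assuming it is installed):

```python
import pandas as pd

# Load metadata only; the audio column stays a relative path string
meta = pd.read_json("data.jsonl", lines=True)

print(meta["duration"].describe())         # segment length statistics
print(meta["emotion"].value_counts())      # happy / angry / neutral counts
print(meta["original_dataset"].nunique())  # should be 5 source datasets
```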
## Speaker ID Mapping
Speaker IDs have been made unique across all merged datasets to avoid conflicts.
For example:
- Original Dataset A: `speaker_0`, `speaker_1`
- Original Dataset B: `speaker_0`, `speaker_1`
- Merged Dataset: `speaker_0`, `speaker_1`, `speaker_2`, `speaker_3`
Original dataset information is preserved in the metadata for reference.
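To see how the remapped speaker IDs relate to their sources, you can group them by `original_dataset`; a sketch reusing the `dataset` object from the loading example above:

```python
from collections import defaultdict

# Drop the audio column so iteration doesn't decode any WAV data
metadata = dataset["train"].remove_columns(["audio"])

speakers_by_source = defaultdict(set)
for row in metadata:
    speakers_by_source[row["original_dataset"]].add(row["speaker_id"])

for source, speakers in sorted(speakers_by_source.items()):
    print(f"{source}: {sorted(speakers)}")
```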
## Data Quality
This dataset was created using the Vyvo Dataset Builder with:
- Automatic transcription and diarization
- Quality filtering for audio segments
- Music and noise filtering
- Emotion detection
- Language identification
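The exact thresholds the builder applies are not documented here. Purely as an illustration, a duration sanity check over the merged split might look like:

```python
# Illustrative only: these bounds are hypothetical, not the
# builder's actual quality-filtering criteria.
MIN_DUR, MAX_DUR = 0.5, 30.0  # seconds

clean = dataset["train"].filter(
    lambda duration: MIN_DUR <= duration <= MAX_DUR,
    input_columns=["duration"],
)
print(f"Kept {clean.num_rows} of {dataset['train'].num_rows} segments")
```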
## License
This dataset is released under the Creative Commons Attribution 4.0 International License (CC BY 4.0).
## Citation
```bibtex
@dataset{vyvo_merged_dataset,
  title={FTTRTEST},
  author={Vyvo Dataset Builder},
  year={2025},
  url={https://huggingface.co/datasets/Codyfederer/fttrtest}
}
```