# sdfdsf

This is a merged speech dataset containing 345 audio segments.

## Dataset Information

- **Total Segments**: 345
- **Speakers**: 7
- **Languages**: en
- **Emotions**: happy, sad, neutral, angry
- **Original Datasets**: 2
## Dataset Structure

Each example contains:

- `audio`: Audio file (WAV format, 16kHz sampling rate)
- `text`: Transcription of the audio
- `speaker_id`: Unique speaker identifier
- `emotion`: Detected emotion (neutral, happy, sad, etc.)
- `language`: Language code (en, es, fr, etc.)
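To make the schema concrete, here is a sketch of what a single example might look like as a Python dict. The field values below are hypothetical placeholders, not taken from the dataset:

```python
# Hypothetical example record; values are illustrative only.
example = {
    "audio": {"path": "segment_0001.wav", "sampling_rate": 16000},
    "text": "Hello there.",
    "speaker_id": "ds1_spk03",
    "emotion": "happy",
    "language": "en",
}

# The keys match the fields listed in this section.
print(sorted(example.keys()))
```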
## Usage

```python
from datasets import load_dataset

dataset = load_dataset("Codyfederer/sdfdsf")
```
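Once loaded, you can select segments by any of the fields above, for example with `dataset.filter(lambda ex: ex["emotion"] == "happy")`. The sketch below shows the same filtering logic over plain dicts (with made-up records following this card's schema) so it runs without downloading the data:

```python
# Stand-in records using the dataset's schema; values are made up
# so the filtering logic can be shown without a download.
records = [
    {"text": "Good morning.", "speaker_id": "spk0", "emotion": "happy", "language": "en"},
    {"text": "I lost my keys.", "speaker_id": "spk1", "emotion": "sad", "language": "en"},
    {"text": "It is Tuesday.", "speaker_id": "spk0", "emotion": "neutral", "language": "en"},
]

# Keep only the happy segments, mirroring the filter call above.
happy = [ex for ex in records if ex["emotion"] == "happy"]
print(len(happy))  # → 1
```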
## Speaker ID Mapping

Speaker IDs have been made unique across all merged datasets to avoid conflicts.
Original dataset information is preserved in the metadata.
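One plausible remapping scheme is to prefix each speaker ID with the name of its source dataset, which guarantees uniqueness after merging. This is a hedged sketch of that idea, not necessarily what the builder tool does:

```python
# Assumed scheme: prefix each speaker ID with its source dataset name
# so IDs stay unique after merging, and record the origin in metadata.
def remap_speaker_ids(datasets_by_name):
    merged = []
    for name, examples in datasets_by_name.items():
        for ex in examples:
            ex = dict(ex)  # copy so the source data is untouched
            ex["speaker_id"] = f"{name}_{ex['speaker_id']}"
            ex["source_dataset"] = name  # preserve origin in metadata
            merged.append(ex)
    return merged

# Two source datasets that both use "spk0" no longer collide after remapping.
merged = remap_speaker_ids({
    "dsA": [{"speaker_id": "spk0"}],
    "dsB": [{"speaker_id": "spk0"}],
})
print([ex["speaker_id"] for ex in merged])  # → ['dsA_spk0', 'dsB_spk0']
```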
## Citation

This dataset was created using the Vyvo Dataset Builder tool. If you use it, please credit the dataset repository.