|
|
--- |
|
|
tags: |
|
|
- speech |
|
|
- speech-transcription |
|
|
- romanian |
|
|
language: |
|
|
- ro |
|
|
license: mit |
|
|
task_categories: |
|
|
- automatic-speech-recognition |
|
|
- audio-classification |
|
|
- text-to-speech |
|
|
- text-to-audio |
|
|
pretty_name: RO_CV20 |
|
|
size_categories: |
|
|
- 10K<n<100K |
|
|
--- |
|
|
# Common Voice Corpus 20.0 (Romanian)
|
|
<h5 style="font-family: 'Calibri'; margin-bottom: 20px;"> |
|
|
Common Voice is an open-source dataset of speech recordings created by
|
|
<a href="https://commonvoice.mozilla.org" target="_blank">Mozilla</a> to improve speech recognition technologies. |
|
|
It consists of crowdsourced voice samples in multiple languages, contributed by volunteers worldwide. |
|
|
</h5> |
|
|
|
|
|
<h5 style="font-family: 'Calibri'; margin-bottom: 20px;"> |
|
|
<strong>Challenges:</strong> The raw dataset included numerous recordings with incorrect transcriptions,
as well as audio requiring adjustments such as resampling, conversion to .wav format, and other refinements
essential for developing and fine-tuning modern models.
|
|
</h5> |
|
|
|
|
|
<h5 style="font-family: 'Calibri'; margin-bottom: 20px;"> |
|
|
<strong>Processing:</strong> Our team, led by project manager Ionuț Vișan, carefully reviewed and manually corrected the |
|
|
transcriptions of all audio segments, ensuring their conversion into the required format for modern models |
|
|
(16 kHz sampling rate, mono channel, .wav format).
|
|
</h5> |
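The target format described above (16 kHz, mono, 16-bit .wav) can be illustrated with Python's standard `wave` module. The one-second 440 Hz tone and the file name below are purely illustrative stand-ins, not actual corpus segments:

```python
import math
import struct
import wave

# Write a one-second test tone in the same format as the processed segments:
# 16 kHz sampling rate, mono channel, 16-bit PCM .wav.
rate = 16000
samples = [
    int(32767 * 0.3 * math.sin(2 * math.pi * 440 * t / rate))
    for t in range(rate)
]
with wave.open("example_16k_mono.wav", "wb") as wf:
    wf.setnchannels(1)      # mono channel
    wf.setsampwidth(2)      # 16-bit PCM
    wf.setframerate(rate)   # 16 kHz sampling rate
    wf.writeframes(struct.pack(f"<{len(samples)}h", *samples))

# Verify the header matches the target format.
with wave.open("example_16k_mono.wav", "rb") as wf:
    print(wf.getframerate(), wf.getnchannels())  # 16000 1
```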
|
|
|
|
|
--- |
|
|
<h2>Dataset Summary</h2>
|
|
|
|
|
<h5 style="font-family: 'Calibri'; margin-bottom: 2px;"> |
|
|
<strong>common_voices20_audio.zip: </strong> The archive containing all processed audio segments. |
|
|
</h5> |
|
|
<h5 style="font-family: 'Calibri'; margin-bottom: 2px;"> |
|
|
Total number of audio segments: <strong>41,431</strong>. |
|
|
</h5> |
|
|
<h5 style="font-family: 'Calibri'; margin-bottom: 20px;"> |
|
|
Total duration of all audio segments combined: approximately <strong>47 hours</strong>. |
|
|
</h5> |
|
|
|
|
|
<h5 style="font-family: 'Calibri'; margin-bottom: 2px;"> |
|
|
<strong>common_voices20.csv: </strong> Contains metadata for all segments in common_voices20_audio.zip.
|
|
</h5> |
|
|
<h5 style="font-family: 'Calibri'; margin-bottom: 2px;"> |
|
|
The file contains 41,431 rows and 2 columns: |
|
|
</h5> |
|
|
<h5 style="font-family: 'Calibri'; margin-bottom: 2px;"> |
|
|
<ul> |
|
|
<li><em>audio_file</em>: File names of the processed audio segments from common_voices20_audio.</li> |
|
|
<li><em>transcript</em>: Corresponding text transcriptions for each audio file from common_voices20_audio.</li> |
|
|
</ul> |
|
|
</h5> |
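Given the two-column layout above, pairing each audio file with its transcript is straightforward. The rows below are hypothetical examples mirroring the CSV structure, not actual entries from the corpus:

```python
import csv
import io

# Hypothetical rows mirroring the common_voices20.csv layout.
sample_csv = (
    "audio_file,transcript\n"
    "segment_0001.wav,Bună dimineața\n"
    "segment_0002.wav,Mulțumesc frumos\n"
)

# Map each audio file name to its transcript.
rows = csv.DictReader(io.StringIO(sample_csv))
transcripts = {row["audio_file"]: row["transcript"] for row in rows}
print(transcripts["segment_0001.wav"])  # Bună dimineața
```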
|
|
|
|
|
--- |
|
|
<h2>Split</h2>
|
|
<h5 style="font-family: 'Calibri'; margin-bottom: 2px;"> |
|
|
To split the dataset (common_voices20.csv), we performed an 80-20 split into training and test sets using a seed value of 42, resulting in: |
|
|
</h5> |
|
|
<h5 style="font-family: 'Calibri'; margin-bottom: 2px;"> |
|
|
<ul> |
|
|
<li><em>train_common_voices20.csv</em>: contains 33,144 audio segments (80%).</li>

<li><em>test_common_voices20.csv</em>: contains 8,287 audio segments (20%).</li>
|
|
</ul> |
|
|
</h5> |
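The 80-20 split with seed 42 can be reproduced along these lines. This is a pandas sketch under assumptions: the card does not state which library produced the official split, and the toy frame below stands in for common_voices20.csv:

```python
import pandas as pd

# Toy stand-in for common_voices20.csv (the real file has 41,431 rows).
df = pd.DataFrame({
    "audio_file": [f"segment_{i:04d}.wav" for i in range(100)],
    "transcript": ["..."] * 100,
})

# 80% train / 20% test, seeded for reproducibility.
train_df = df.sample(frac=0.8, random_state=42)
test_df = df.drop(train_df.index)
print(len(train_df), len(test_df))  # 80 20
```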
|
|
|
|
|
--- |
|
|
<h2>How to use</h2>
|
|
|
|
|
<h5 style="font-family: 'Calibri'; margin-bottom: 20px;"> |
|
|
If you want to download all the files from the dataset, use the following code:
|
|
</h5> |
|
|
|
|
|
<details> |
|
|
<summary><strong>Click to expand the code</strong></summary> |
|
|
|
|
|
```python
from huggingface_hub import hf_hub_download
import zipfile
import os

# Repo and files
dataset_id = "TransferRapid/CommonVoices20_ro"

filenames = [
    "common_voices20.csv",
    "test_common_voices20.csv",
    "train_common_voices20.csv",
    "common_voices20_audio.zip",
]

# Download files
for filename in filenames:
    print(f"Downloading {filename}...")
    file_path = hf_hub_download(
        repo_id=dataset_id,
        filename=filename,
        repo_type="dataset",
        local_dir="./",
    )
    print(f"Downloaded {filename} to: {file_path}")

    # Extract ZIP archives
    if filename.endswith(".zip"):
        extracted_dir = filename.replace(".zip", "")
        with zipfile.ZipFile(file_path, "r") as zip_ref:
            zip_ref.extractall(extracted_dir)
        print(f"Extracted files to: {extracted_dir}")
        print(os.listdir(extracted_dir))
    else:
        print(f"{filename} is available.")
```
|
|
</details> |
|
|
|
|
|
--- |
|
|
<h2>Usage</h2>
|
|
<h5 style="font-family: 'Calibri'; margin-bottom: 2px;"> |
|
|
The dataset can be used for: |
|
|
</h5> |
|
|
<h5 style="font-family: 'Calibri'; margin-bottom: 2px;"> |
|
|
<ul> |
|
|
<li><em>Speech-to-Text (STT) – Automatic Transcription</em></li> |
|
|
<li><em>Text-to-Speech (TTS) – Synthetic Voice Generation</em></li> |
|
|
<li><em>Speech Enhancement & Noise Reduction</em></li> |
|
|
<li><em>Speaker Recognition & Verification</em></li> |
|
|
<li><em>Sentiment Analysis & Emotion Recognition</em></li> |
|
|
<li><em>AI-Powered Voice Assistants & Smart Devices</em></li> |
|
|
</ul> |
|
|
</h5> |
|
|
|
|
|
--- |
|
|
<h2>Communication</h2>
|
|
<h5 style="font-family: 'Calibri'; margin-bottom: 2px;"> |
|
|
For any questions regarding this dataset or to explore collaborations on ambitious AI/ML projects, please feel free to contact us at: |
|
|
</h5> |
|
|
<h5 style="font-family: 'Calibri'; margin-bottom: 2px;"> |
|
|
<ul> |
|
|
<li><em>ionut.visan@transferrapid.com</em></li> |
|
|
<li><em><a href="https://www.linkedin.com/in/ionut-visan/" target="_blank">Ionuț Vișan's Linkedin</a></em></li> |
|
|
<li><em><a href="https://www.linkedin.com/company/transfer-rapid" target="_blank">Transfer Rapid's Linkedin</a></em></li> |
|
|
</ul> |
|
|
</h5> |
|
|