Modalities: Text
Formats: parquet
Size: 1M - 10M
Tags: speech-recognition, emotion-classification, age-detection, gender-detection, entity-tagging, intent-detection

# Meta Speech Recognition European Languages Dataset (v1)

This dataset contains only the metadata (JSON/Parquet) for European language speech recognition samples.

**Audio files are NOT included.**

## Data Download Links

### CommonVoice
- [CommonVoice Dataset](https://commonvoice.mozilla.org/en/datasets)
  - German (de)
  - English (en)
  - Spanish (es)
  - French (fr)
  - Italian (it)
  - Portuguese (pt)

### Multilingual LibriSpeech (MLS)
- [Multilingual LibriSpeech Dataset](https://www.openslr.org/94/)
  - German: [mls_german.tar.gz](https://dl.fbaipublicfiles.com/mls/mls_german.tar.gz)
  - English: [mls_english.tar.gz](https://dl.fbaipublicfiles.com/mls/mls_english.tar.gz)
  - Spanish: [mls_spanish.tar.gz](https://dl.fbaipublicfiles.com/mls/mls_spanish.tar.gz)
  - French: [mls_french.tar.gz](https://dl.fbaipublicfiles.com/mls/mls_french.tar.gz)
  - Italian: [mls_italian.tar.gz](https://dl.fbaipublicfiles.com/mls/mls_italian.tar.gz)
  - Portuguese: [mls_portuguese.tar.gz](https://dl.fbaipublicfiles.com/mls/mls_portuguese.tar.gz)

### People's Speech
- [People's Speech Dataset](https://huggingface.co/datasets/MLCommons/peoples_speech)

## Setup Instructions

### 1. Download and Organize Audio Files

After downloading, organize your audio files as follows:
- `/cv` for CommonVoice audio (subdirectories by language)
- `/mls` for Multilingual LibriSpeech audio (subdirectories by language)
- `/peoplespeech_audio` for People's Speech audio
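Before moving on to the conversion step, it can help to confirm this directory layout actually exists. A minimal sketch (the root paths follow this README's layout; adjust them if you mounted the data elsewhere):

```python
import os

# Audio roots from the layout above; adjust if you mounted elsewhere.
EXPECTED_ROOTS = ['/cv', '/mls', '/peoplespeech_audio']

def missing_roots(roots=EXPECTED_ROOTS):
    """Return the expected audio directories that do not exist yet."""
    return [r for r in roots if not os.path.isdir(r)]

if __name__ == '__main__':
    missing = missing_roots()
    if missing:
        print('Missing audio directories:', ', '.join(missing))
    else:
        print('All audio roots found.')
```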

### 2. Convert Parquet Files to NeMo Manifests

Create a script `parquet_to_manifest.py`:
```python
from datasets import load_dataset
import json
import os

def convert_to_manifest(dataset, split, output_file):
    with open(output_file, 'w') as f:
        for item in dataset[split]:
            # Ensure paths match your mounted directories.
            # `source` looks like "commonvoice_es"; entries without a
            # language suffix would crash a plain split(), so partition.
            source, _, lang = item['source'].partition('_')
            if source == 'commonvoice':
                item['audio_filepath'] = os.path.join('/cv', lang, item['audio_filepath'])
            elif source == 'librispeech':
                item['audio_filepath'] = os.path.join('/mls', lang, item['audio_filepath'])
            elif source == 'peoplespeech':
                item['audio_filepath'] = os.path.join('/peoplespeech_audio', item['audio_filepath'])

            manifest_entry = {
                'audio_filepath': item['audio_filepath'],
                'text': item['text'],
                'duration': item['duration']
            }
            f.write(json.dumps(manifest_entry) + '\n')

# Load the dataset from Hugging Face
dataset = load_dataset("WhissleAI/Meta_STT_EURO_Set1")

# Convert each split to a manifest
for split in dataset.keys():
    output_file = f"{split}_manifest.json"
    convert_to_manifest(dataset, split, output_file)
    print(f"Created manifest for {split}: {output_file}")
```

Run the conversion:
```bash
python parquet_to_manifest.py
```

This will create manifest files (`train_manifest.json`, `valid_manifest.json`, etc.) in NeMo format.
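A quick way to sanity-check a generated manifest before training. This is a sketch; `REQUIRED_KEYS` reflects the fields the conversion script writes, and the sample line is inlined for illustration:

```python
import json

# Fields the conversion script writes into every manifest line.
REQUIRED_KEYS = {'audio_filepath', 'text', 'duration'}

def validate_manifest_line(line):
    """Parse one manifest line and ensure the NeMo fields are present."""
    entry = json.loads(line)
    missing = REQUIRED_KEYS - entry.keys()
    if missing:
        raise ValueError(f'manifest entry missing keys: {sorted(missing)}')
    return entry

# Example line in the format produced by parquet_to_manifest.py
sample = '{"audio_filepath": "/cv/es/clip.mp3", "text": "hola", "duration": 1.2}'
entry = validate_manifest_line(sample)
print(entry['duration'])  # 1.2
```

In practice you would loop `validate_manifest_line` over each line of `train_manifest.json` and friends.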

### 3. Pull and Run NeMo Docker
```bash
# Pull the NeMo Docker image
docker pull nvcr.io/nvidia/nemo:24.05

# Run the container with GPU support and mounted volumes
docker run --gpus all -it --rm \
  -v /external1:/external1 \
  -v /external2:/external2 \
  -v /external3:/external3 \
  -v /cv:/cv \
  -v /mls:/mls \
  -v /peoplespeech_audio:/peoplespeech_audio \
  --shm-size=8g \
  -p 8888:8888 -p 6006:6006 \
  --ulimit memlock=-1 \
  --ulimit stack=67108864 \
  --device=/dev/snd \
  nvcr.io/nvidia/nemo:24.05
```

### 4. Fine-tuning Instructions

#### A. Create a config file (e.g., `config.yaml`):
```yaml
model:
  name: "ConformerCTC"
  pretrained_model: "nvidia/stt_en_conformer_ctc_large"  # or your preferred model

  train_ds:
    manifest_filepath: "train_manifest.json"  # Path to the manifest created in step 2
    batch_size: 32

  validation_ds:
    manifest_filepath: "valid_manifest.json"  # Path to the manifest created in step 2
    batch_size: 32

  optim:
    name: adamw
    lr: 0.001

trainer:
  devices: 1
  accelerator: "gpu"
  max_epochs: 100
```
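Before launching, it can be worth parsing the config to catch indentation mistakes early. A minimal sketch using PyYAML (available in the NeMo container); the config is inlined here for illustration, but in practice you would `yaml.safe_load(open("config.yaml"))` on your real file:

```python
import yaml  # PyYAML

# A trimmed copy of the step-4 config, inlined for illustration.
CONFIG = """
model:
  pretrained_model: "nvidia/stt_en_conformer_ctc_large"
  train_ds:
    manifest_filepath: "train_manifest.json"
    batch_size: 32
trainer:
  devices: 1
  max_epochs: 100
"""

cfg = yaml.safe_load(CONFIG)
# Catch the most common mistakes: missing nesting or wrong key names.
assert 'train_ds' in cfg['model'], 'train_ds must be nested under model'
print(cfg['model']['train_ds']['manifest_filepath'])  # train_manifest.json
```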

#### B. Start Fine-tuning:
```bash
# Inside the NeMo container
python -m torch.distributed.launch --nproc_per_node=1 \
  examples/asr/speech_to_text_finetune.py \
  --config-path=. \
  --config-name=config.yaml
```

## Dataset Statistics

### Splits and Sample Counts
- **train**: 5,140,607 samples
- **valid**: 194,933 samples
- **test**: 208,743 samples

## Example Samples

### train
```json
{
  "audio_filepath": "/cv/cv-corpus-15.0-2023-09-08/es/clips/common_voice_es_19698530.mp3",
  "text": "Habita en aguas poco profundas y rocosas. AGE_30_45 GER_MALE EMOTION_NEUTRAL INTENT_INFORM",
  "duration": 3.67,
  "source": "commonvoice_es"
}
```
```json
{
  "audio_filepath": "/cv/cv-corpus-15.0-2023-09-08/es/clips/common_voice_es_19987333.mp3",
  "text": "Opera principalmente vuelos de cabotaje y regionales de carga. AGE_18_30 GER_FEMALE EMOTION_NEUTRAL INTENT_INFORM",
  "duration": 6.86,
  "source": "commonvoice_es"
}
```

### valid
```json
{
  "audio_filepath": "/cv/cv-corpus-15.0-2023-09-08/fr/clips2/common_voice_fr_18031586.mp3",
  "text": "Je vais mordre dans cet oiseau. AGE_45_60 GER_MALE EMOTION_DISGUST INTENT_INFORM",
  "duration": 2.38,
  "source": "commonvoice_fr"
}
```
```json
{
  "audio_filepath": "/cv/cv-corpus-15.0-2023-09-08/fr/clips2/common_voice_fr_18031602.mp3",
  "text": "L'entrevue fut courte, mais bien affectueuse et bien douloureuse de part et d'autre. AGE_45_60 GER_MALE EMOTION_DISGUST INTENT_INFORM",
  "duration": 5.57,
  "source": "commonvoice_fr"
}
```

### test
```json
{
  "audio_filepath": "/librespeech-en/train-other-500/1646/121408/1646-121408-0038.flac",
  "text": "As I was re conducting, the young man for whom you have asked, he approached the glass door of the gallery, and gazed intently upon some object, doubtless the picture by Raphael, which is opposite the door, he reflected for a second, and then descended the stairs. AGE_30_45 GER_MALE EMOTION_NEU INTENT_DESCRIBE",
  "duration": 14.91,
  "source": "librispeech_en"
}
```
```json
{
  "audio_filepath": "/librespeech-en/train-other-500/3409/173540/3409-173540-0013.flac",
  "text": "Have suffered so much but my dear child, consult only your own heart. That is all I have to say, and concealing his unvarying emotion. AGE_45_60 GER_FEMALE EMOTION_SAD INTENT_INFORM",
  "duration": 12.47,
  "source": "librispeech_en"
}
```

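As the samples show, the `text` field mixes the transcript with metadata tags (`AGE_*`, `GER_*`, `EMOTION_*`, `INTENT_*`). A sketch for separating the two, assuming the tag prefixes seen in these examples:

```python
import re

# Metadata tokens appended to each transcript, e.g. AGE_30_45, GER_MALE.
TAG_PATTERN = re.compile(r'\b(?:AGE|GER|EMOTION|INTENT)_[A-Z0-9_]+')

def split_text_and_tags(text):
    """Return (transcript, tags); tags keeps the metadata tokens in order."""
    tags = TAG_PATTERN.findall(text)
    transcript = ' '.join(TAG_PATTERN.sub(' ', text).split())
    return transcript, tags

text = "Habita en aguas poco profundas y rocosas. AGE_30_45 GER_MALE EMOTION_NEUTRAL INTENT_INFORM"
transcript, tags = split_text_and_tags(text)
print(transcript)  # Habita en aguas poco profundas y rocosas.
print(tags)        # ['AGE_30_45', 'GER_MALE', 'EMOTION_NEUTRAL', 'INTENT_INFORM']
```

This is useful if you want plain transcripts for standard ASR fine-tuning, or the tags alone for the classification tasks listed in this card.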
## Usage Notes

1. The metadata in this repository contains paths to audio files that must match your local setup.
2. When fine-tuning, ensure your manifest files use the correct paths for your mounted directories.
3. For optimal performance:
   - Use a GPU with at least 16GB VRAM
   - Adjust batch size based on your GPU memory
   - Consider gradient accumulation for larger effective batch sizes
   - Monitor training with TensorBoard (accessible via port 6006)
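The gradient-accumulation suggestion maps to a single trainer setting in the step-4 config. A sketch; `accumulate_grad_batches` is the PyTorch Lightning option the NeMo trainer accepts:

```yaml
trainer:
  devices: 1
  accelerator: "gpu"
  max_epochs: 100
  accumulate_grad_batches: 4  # effective batch size = 32 * 4 = 128
```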

## Common Issues and Solutions

1. **Path Mismatches**: Ensure audio file paths in manifests match the mounted directories in Docker
2. **Memory Issues**: Reduce batch size or use gradient accumulation
3. **Docker Permissions**: Ensure proper permissions for mounted volumes and audio devices