Modalities: Text
Formats: parquet
Size: 1M - 10M
Tags: speech-recognition, emotion-classification, age-detection, gender-detection, entity-tagging, intent-detection
This dataset contains only the metadata (JSON/Parquet) for European language speech...
### People's Speech
- [People's Speech Dataset](https://huggingface.co/datasets/MLCommons/peoples_speech)
## Setup Instructions
### 1. Download and Organize Audio Files

After downloading, organize your audio files as follows:

- `/cv` for CommonVoice audio (subdirectories by language)
- `/mls` for Multilingual LibriSpeech audio (subdirectories by language)
- `/peoplespeech_audio` for People's Speech audio
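Before running the conversion below, it can save a failed run to confirm these directories actually exist. A minimal sketch (the `missing_mounts` helper is illustrative, not part of the dataset tooling):

```python
import os

# Expected audio directories from the layout above; adjust to your setup.
EXPECTED_DIRS = ["/cv", "/mls", "/peoplespeech_audio"]

def missing_mounts(dirs):
    """Return the subset of expected directories that do not exist on disk."""
    return [d for d in dirs if not os.path.isdir(d)]

missing = missing_mounts(EXPECTED_DIRS)
if missing:
    print("Missing audio directories:", ", ".join(missing))
else:
    print("All audio directories are in place.")
```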
### 2. Convert Parquet Files to NeMo Manifests

Create a script `parquet_to_manifest.py`:
```python
from datasets import load_dataset
import json
import os

def convert_to_manifest(dataset, split, output_file):
    with open(output_file, 'w') as f:
        for item in dataset[split]:
            # Ensure paths match your mounted directories.
            # "source" looks like "librispeech_en"; partition() is safer than
            # split() in case an entry carries no language suffix.
            source, _, lang = item['source'].partition('_')
            if source == 'commonvoice':
                item['audio_filepath'] = os.path.join('/cv', lang, item['audio_filepath'])
            elif source == 'librispeech':
                item['audio_filepath'] = os.path.join('/mls', lang, item['audio_filepath'])
            elif source == 'peoplespeech':
                item['audio_filepath'] = os.path.join('/peoplespeech_audio', item['audio_filepath'])

            manifest_entry = {
                'audio_filepath': item['audio_filepath'],
                'text': item['text'],
                'duration': item['duration']
            }
            f.write(json.dumps(manifest_entry) + '\n')

# Load the dataset from Hugging Face
dataset = load_dataset("WhissleAI/Meta_STT_EURO_Set1")

# Convert each split to manifest
for split in dataset.keys():
    output_file = f"{split}_manifest.json"
    convert_to_manifest(dataset, split, output_file)
    print(f"Created manifest for {split}: {output_file}")
```
Run the conversion:
```bash
python parquet_to_manifest.py
```

This will create manifest files (`train_manifest.json`, `valid_manifest.json`, etc.) in NeMo format.
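NeMo manifests are JSON Lines files: one object per line with at least `audio_filepath`, `text`, and `duration`. A quick validator like the following (a sketch; `validate_manifest` is not part of NeMo) can catch malformed lines before training starts:

```python
import json

# Keys NeMo ASR expects in every manifest line.
REQUIRED_KEYS = {"audio_filepath", "text", "duration"}

def validate_manifest(path):
    """Return (line_number, problem) pairs for lines that are not valid NeMo entries."""
    problems = []
    with open(path) as f:
        for lineno, line in enumerate(f, start=1):
            try:
                entry = json.loads(line)
            except json.JSONDecodeError:
                problems.append((lineno, "not valid JSON"))
                continue
            missing = REQUIRED_KEYS - entry.keys()
            if missing:
                problems.append((lineno, f"missing keys: {sorted(missing)}"))
    return problems

# Usage (after step 2 has produced the file):
# problems = validate_manifest("train_manifest.json")
```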
### 3. Pull and Run NeMo Docker
```bash
# Pull the NeMo Docker image
docker pull nvcr.io/nvidia/nemo:24.05

# Run the container with GPU support and mounted volumes
docker run --gpus all -it --rm \
  -v /external1:/external1 \
  -v /external2:/external2 \
  -v /external3:/external3 \
  -v /cv:/cv \
  -v /mls:/mls \
  -v /peoplespeech_audio:/peoplespeech_audio \
  --shm-size=8g \
  -p 8888:8888 -p 6006:6006 \
  --ulimit memlock=-1 \
  --ulimit stack=67108864 \
  --device=/dev/snd \
  nvcr.io/nvidia/nemo:24.05
```
### 4. Fine-tuning Instructions

#### A. Create a config file (e.g., `config.yaml`):
```yaml
model:
  name: "ConformerCTC"
  pretrained_model: "nvidia/stt_en_conformer_ctc_large" # or your preferred model

  train_ds:
    manifest_filepath: "train_manifest.json" # Path to the manifest created in step 2
    batch_size: 32

  validation_ds:
    manifest_filepath: "valid_manifest.json" # Path to the manifest created in step 2
    batch_size: 32

  optim:
    name: adamw
    lr: 0.001

trainer:
  devices: 1
  accelerator: "gpu"
  max_epochs: 100
```

#### B. Start Fine-tuning:
```bash
# Inside the NeMo container
python -m torch.distributed.launch --nproc_per_node=1 \
  examples/asr/speech_to_text_finetune.py \
  --config-path=. \
  --config-name=config.yaml
```
## Dataset Statistics
```json
{
  ...
  "duration": 12.47,
  "source": "librispeech_en"
}
```
## Usage Notes

1. The metadata in this repository contains paths to audio files that must match your local setup.
2. When fine-tuning, ensure your manifest files use the correct paths for your mounted directories.
3. For optimal performance:
   - Use a GPU with at least 16GB VRAM
   - Adjust batch size based on your GPU memory
   - Consider gradient accumulation for larger effective batch sizes
   - Monitor training with TensorBoard (accessible via port 6006)
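With gradient accumulation, the batch size seen by the optimizer is the per-device batch size times the number of accumulation steps (times the number of devices). A small sketch of the arithmetic (`effective_batch_size` is an illustrative helper, not a NeMo API):

```python
def effective_batch_size(per_device_batch, accumulate_grad_batches, num_devices=1):
    """Batch size per optimizer step when gradients are accumulated."""
    return per_device_batch * accumulate_grad_batches * num_devices

# With batch_size 32 from the config, accumulating over 4 steps on one GPU
# gives the optimizer an effective batch of 128 without extra GPU memory.
print(effective_batch_size(32, 4))  # 128
```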
## Common Issues and Solutions

1. **Path Mismatches**: Ensure audio file paths in manifests match the mounted directories in Docker
2. **Memory Issues**: Reduce batch size or use gradient accumulation
3. **Docker Permissions**: Ensure proper permissions for mounted volumes and audio devices
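Path mismatches are easiest to catch before training by checking that every `audio_filepath` in a manifest resolves inside the container. A minimal sketch (the `missing_audio_files` helper is illustrative, not part of NeMo):

```python
import json
import os

def missing_audio_files(manifest_path):
    """Return the audio_filepath values from a manifest that do not exist on disk."""
    missing = []
    with open(manifest_path) as f:
        for line in f:
            entry = json.loads(line)
            if not os.path.isfile(entry["audio_filepath"]):
                missing.append(entry["audio_filepath"])
    return missing

# Run inside the container, where /cv, /mls, and /peoplespeech_audio are mounted:
# print(missing_audio_files("train_manifest.json"))
```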