---
task_categories:
  - audio-classification
  - automatic-speech-recognition
language:
  - en
tags:
  - speech-recognition
  - emotion-detection
  - age-detection
  - gender-classification
  - intent-prediction
  - entity-tagging
  - av-speech
---
# Meta Speech Recognition English Dataset (Set 2)
This dataset contains both metadata and audio files for English speech recognition samples. Each transcript is annotated with inline tags for speaker age, gender, emotion, and intent.
## Dataset Statistics

### Splits and Sample Counts

- train: 42961 samples
- valid: 2387 samples
- test: 2387 samples
## Example Samples

### train

```json
{
  "audio_filepath": "/external1/datasets/asr-himanshu/avspeech-data/audio/AzSutepklXI_2.wav",
  "text": "To Jesus, so God is faithful, because when he keeps, you know, when, when when you ask him to do something, he keeps his What. He kept, his cobonut With Jacob, you know through the years, he. AGE_18_30 GER_FEMALE EMOTION_ANG INTENT_INFORM",
  "duration": 14.79
}
```

```json
{
  "audio_filepath": "/external1/datasets/asr-himanshu/avspeech-data/audio/ZwNpsW26jp4_13.wav",
  "text": "Oppressortein would like to cover a host of things, but the first thing we'd like to find out is how is the Israeli economy doing today. AGE_30_45 GER_MALE EMOTION_NEU INTENT_QUESTION",
  "duration": 6.39
}
```

### valid

```json
{
  "audio_filepath": "/external1/datasets/asr-himanshu/avspeech-data/audio/76yqm7rlKnE_4.wav",
  "text": "And disowns throughout its entirety. AGE_18_30 GER_MALE EMOTION_NEU INTENT_INFORM",
  "duration": 3.0
}
```

```json
{
  "audio_filepath": "/external1/datasets/asr-himanshu/avspeech-data/audio/I-9GBnlAl_U_5.wav",
  "text": "You students who continue to edify me daily in my life. AGE_30_45 GER_FEMALE EMOTION_ANG INTENT_INFORM",
  "duration": 3.34
}
```

### test

```json
{
  "audio_filepath": "/external1/datasets/asr-himanshu/avspeech-data/audio/LNtQPSUi1iQ_10.wav",
  "text": "Know some details about the bild you can drill into individual. AGE_30_45 GER_MALE EMOTION_ANG INTENT_QUESTION",
  "duration": 3.06
}
```

```json
{
  "audio_filepath": "/external1/datasets/asr-himanshu/avspeech-data/audio/9VU0GVCW0G4_10.wav",
  "text": "Just be shuffling papers, so we vet each Parker and make a partner and make sure that they are going to provide students with. AGE_30_45 GER_MALE EMOTION_NEU INTENT_INFORM",
  "duration": 6.87
}
```
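Each sample's `text` field ends with inline metadata tags (`AGE_*`, `GER_*`, `EMOTION_*`, `INTENT_*`; an `ENTITY_` prefix is assumed from the `entity-tagging` tag, though none appears in the samples above). A minimal sketch for separating the spoken transcript from the tags:

```python
# Tag prefixes inferred from the example samples; ENTITY_ is an assumption
# based on the dataset's entity-tagging tag.
TAG_PREFIXES = ("AGE_", "GER_", "EMOTION_", "INTENT_", "ENTITY_")

def split_text_and_tags(text):
    """Split a sample's text field into (transcript, metadata tags)."""
    words = text.split()
    tags = [w for w in words if w.startswith(TAG_PREFIXES)]
    transcript = " ".join(w for w in words if not w.startswith(TAG_PREFIXES))
    return transcript, tags

transcript, tags = split_text_and_tags(
    "And disowns throughout its entirety. AGE_18_30 GER_MALE EMOTION_NEU INTENT_INFORM"
)
# tags -> ['AGE_18_30', 'GER_MALE', 'EMOTION_NEU', 'INTENT_INFORM']
```

This is useful when you want to train a plain ASR model on the transcript alone, or to use the tags as classification labels.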
## Training NeMo Conformer ASR

### 1. Pull and Run NeMo Docker

```bash
# Pull the NeMo Docker image
docker pull nvcr.io/nvidia/nemo:24.05

# Run the container with GPU support
docker run --gpus all -it --rm \
  -v /external1:/external1 \
  -v /external2:/external2 \
  -v /external3:/external3 \
  --shm-size=8g \
  -p 8888:8888 -p 6006:6006 \
  --ulimit memlock=-1 \
  --ulimit stack=67108864 \
  nvcr.io/nvidia/nemo:24.05
```
### 2. Create Training Script

Create a script `train_nemo_asr.py`. Note that instantiating `EncDecCTCModel` from a config containing only data and optimizer sections would fail (the model also needs encoder, decoder, and vocabulary configs), so the script below fine-tunes a pretrained Conformer CTC checkpoint instead; the trainer is also created first and passed to the model, rather than nested inside the model config:

```python
import pytorch_lightning as pl
from omegaconf import OmegaConf
from nemo.collections.asr.models import EncDecCTCModelBPE
from datasets import load_dataset

# Load the dataset from Hugging Face (used to build the NeMo manifests)
dataset = load_dataset("WhissleAI/Meta_STT_EN_Set2")

# Data and optimizer config; manifest files use NeMo's JSON-lines format
config = OmegaConf.create({
    'train_ds': {
        'manifest_filepath': 'train.json',
        'batch_size': 32,
        'shuffle': True,
        'num_workers': 4,
        'pin_memory': True,
        'use_start_end_token': False,
    },
    'validation_ds': {
        'manifest_filepath': 'valid.json',
        'batch_size': 32,
        'shuffle': False,
        'num_workers': 4,
        'pin_memory': True,
        'use_start_end_token': False,
    },
    'optim': {
        'name': 'adamw',
        'lr': 0.001,
        'weight_decay': 0.01,
    },
})

# Create the trainer first so the model can register with it
trainer = pl.Trainer(devices=1, accelerator='gpu', max_epochs=100, precision=16)

# Fine-tune from a pretrained Conformer CTC checkpoint
model = EncDecCTCModelBPE.from_pretrained("stt_en_conformer_ctc_small", trainer=trainer)
model.setup_training_data(config.train_ds)
model.setup_validation_data(config.validation_ds)
model.setup_optimization(config.optim)

# Train
trainer.fit(model)
```
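NeMo expects manifests as JSON lines with `audio_filepath`, `text`, and `duration` keys, which is exactly the schema of the sample records shown earlier. A minimal sketch of writing one (in practice you would iterate each split of the loaded Hugging Face dataset; the single record below is copied from the valid split above):

```python
import json

# Each NeMo manifest line is a standalone JSON object with
# "audio_filepath", "text", and "duration" keys.
records = [
    {
        "audio_filepath": "/external1/datasets/asr-himanshu/avspeech-data/audio/76yqm7rlKnE_4.wav",
        "text": "And disowns throughout its entirety. AGE_18_30 GER_MALE EMOTION_NEU INTENT_INFORM",
        "duration": 3.0,
    },
]

# Write one JSON object per line
with open("valid.json", "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")

# Each line parses back to the original record
with open("valid.json") as f:
    loaded = [json.loads(line) for line in f]
```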
### 3. Create Config File

Create a config file `config.yaml` (the `trainer` section sits at the top level, alongside `model`, as in standard NeMo configs):

```yaml
model:
  name: "EncDecCTCModel"
  train_ds:
    manifest_filepath: "train.json"
    batch_size: 32
    shuffle: true
    num_workers: 4
    pin_memory: true
    use_start_end_token: false
  validation_ds:
    manifest_filepath: "valid.json"
    batch_size: 32
    shuffle: false
    num_workers: 4
    pin_memory: true
    use_start_end_token: false
  optim:
    name: adamw
    lr: 0.001
    weight_decay: 0.01
trainer:
  devices: 1
  accelerator: "gpu"
  max_epochs: 100
  precision: 16
```
### 4. Start Training

```bash
# Inside the NeMo container
python train_nemo_asr.py
```

The script configures the trainer directly, so no launcher flags are needed; for multi-GPU runs, increase `devices` in the trainer config and launch with `torchrun`.
## Usage Notes

- The dataset includes both metadata and audio files.
- Audio files are stored in the dataset repository.
- For optimal performance:
  - Use a GPU with at least 16 GB of VRAM.
  - Adjust batch size based on your GPU memory.
  - Consider gradient accumulation for larger effective batch sizes.
  - Monitor training with TensorBoard (accessible via port 6006).
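Gradient accumulation is enabled through the PyTorch Lightning trainer settings. A sketch of the `trainer` section of `config.yaml` (the value 4 is illustrative: accumulating 4 batches of 32 gives an effective batch size of 128):

```yaml
trainer:
  devices: 1
  accelerator: "gpu"
  max_epochs: 100
  precision: 16                # mixed-precision training
  accumulate_grad_batches: 4   # effective batch size = 32 * 4 = 128
```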
## Common Issues and Solutions

**Memory issues:**

- Reduce batch size if you encounter OOM errors.
- Use gradient accumulation for larger effective batch sizes.
- Enable mixed-precision training (fp16).
**Training speed:**

- Increase `num_workers` based on your CPU cores.
- Use `pin_memory: true` for faster data transfer to the GPU.
- Consider using tarred datasets for faster I/O.
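Tarred datasets are configured through the `train_ds` section. A sketch, assuming WebDataset-style shards (the file names and shard count here are hypothetical; `_OP_` and `_CL_` are NeMo's escapes for the `{0..127}` brace range):

```yaml
train_ds:
  is_tarred: true
  tarred_audio_filepaths: "tarred/audio__OP_0..127_CL_.tar"  # hypothetical shard pattern
  manifest_filepath: "tarred/tarred_audio_manifest.json"
  shuffle_n: 2048  # shuffle buffer size for tarred samples
```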
**Model performance:**

- Adjust the learning rate based on your batch size.
- Use learning-rate warmup for better convergence.
- Consider using a pretrained model as initialization.
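One common heuristic for adjusting the learning rate is the linear scaling rule (an assumption here, not a NeMo requirement): scale the learning rate proportionally with batch size relative to a reference setting.

```python
def scale_lr(base_lr, base_batch_size, batch_size):
    """Linear scaling rule: lr grows proportionally with batch size."""
    return base_lr * batch_size / base_batch_size

# The config above uses lr=0.001 at batch_size=32; with an effective
# batch of 128 (e.g. via gradient accumulation), the scaled lr is:
lr = scale_lr(0.001, 32, 128)  # 0.004
```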