# 🎙️ BanglaSpeechCorpus-321: Large-Scale Long-Form Bangla Speech Corpus

## Dataset Summary

BanglaSpeechCorpus-321 is an extended, large-scale Bangla (Bengali) speech corpus for Automatic Speech Recognition (ASR), featuring 321.2 hours of naturally occurring Bangla speech across 401 recordings. This is the expanded successor to Bangla_Speech_Corpus, covering a broader set of YouTube channels including drama serials, audiobooks, and entertainment content.

With over 303,000 segments, 1.79 million words, and a maximum segment duration of 20 seconds, this corpus is purpose-built for long-form, continuous-transcript ASR training and evaluation, a regime in which current state-of-the-art models remain largely untested for Bangla.


## Key Statistics

| Property | Value |
|---|---|
| Language | Bengali (bn) |
| Total Recordings | 401 |
| Total Segments | 303,497 |
| Total Words | 1,798,918 |
| Total Duration | 321.2 hours |
| Max Segment Duration | 20.0 seconds |
| Audio Format | WAV (16 kHz, mono) |
| Storage | ~206.47 GB |
| Task | Automatic Speech Recognition (ASR) |
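A quick sanity check derivable from the statistics above: 20 seconds is a *maximum* segment length, while the typical segment is much shorter.

```python
# Averages derived from the corpus-level statistics above.
total_seconds = 321.2 * 3600        # 321.2 hours of audio
segments = 303_497
words = 1_798_918

avg_duration = total_seconds / segments
avg_words = words / segments

print(f"Average segment duration: {avg_duration:.2f} s")  # ≈ 3.81 s
print(f"Average words per segment: {avg_words:.2f}")      # ≈ 5.93
```

So most segments are short utterances well under the 20-second cap.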

## Source Channels

Channels include all sources from the original BanglaVoice-LF corpus, plus additional channels:

| Channel | Notes |
|---|---|
| Eagle Premier Station | Drama |
| Banglavision DRAMA | Drama |
| Maasranga Drama | Drama |
| CMV | Music & Drama |
| KS Entertainment | Entertainment |
| GOLLACHUT | Entertainment |
| Raad Drama | Drama |
| Rabbit Entertainment | Drama |
| + Additional channels | New in this release |

A full per-channel breakdown is available in `video_checklist.csv`.

## Dataset Structure

```
Bangla_speech_corpus-321/
├── audio/                  # 401 WAV audio files (16 kHz, mono)
├── subtitles_raw/          # Raw YouTube subtitle files
├── transcripts/            # Cleaned, aligned transcript JSONs (401 files)
├── manifest.jsonl          # One JSON object per segment (schema below)
├── video_checklist.csv     # Per-video metadata
└── completed_videos.log    # Processing log
```

### `manifest.jsonl` Schema

Each line is a JSON object representing one segment:

```json
{
  "id": "video_id_seg_00042",
  "audio_path": "audio/video_id.wav",
  "transcript": "বাংলা ট্রান্সক্রিপ্ট এখানে লেখা আছে",
  "channel": "Banglavision DRAMA",
  "start": 124.5,
  "end": 139.2,
  "duration": 14.7,
  "source_url": "https://www.youtube.com/watch?v=..."
}
```
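Because the manifest is plain JSON Lines, it can also be consumed without the `datasets` library. A minimal sketch (the two records below are illustrative stand-ins, not taken from the corpus):

```python
import json
from io import StringIO

# Stand-in for open("manifest.jsonl"): two illustrative records in the schema above.
manifest = StringIO(
    '{"id": "vid_seg_00001", "audio_path": "audio/vid.wav", "transcript": "...", '
    '"channel": "CMV", "start": 0.0, "end": 12.5, "duration": 12.5, "source_url": ""}\n'
    '{"id": "vid_seg_00002", "audio_path": "audio/vid.wav", "transcript": "...", '
    '"channel": "CMV", "start": 12.5, "end": 20.0, "duration": 7.5, "source_url": ""}\n'
)

segments = [json.loads(line) for line in manifest]
total_duration = sum(seg["duration"] for seg in segments)
print(f"{len(segments)} segments, {total_duration:.1f} s total")  # 2 segments, 20.0 s total
```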

### Transcript JSON Schema

Each file in transcripts/ contains:

```json
{
  "video_id": "abc123",
  "channel": "Eagle Premier Station",
  "duration_seconds": 2910.0,
  "segments": [
    { "start": 0.0, "end": 18.4, "text": "..." },
    { "start": 18.4, "end": 35.1, "text": "..." }
  ]
}
```
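The schema example shows back-to-back segments (each segment's `end` equals the next segment's `start`); whether that holds for every file is worth verifying before relying on the timestamps for forced alignment. A small contiguity check over one transcript object:

```python
# Illustrative transcript object in the schema above (values from the example).
transcript = {
    "video_id": "abc123",
    "channel": "Eagle Premier Station",
    "duration_seconds": 35.1,
    "segments": [
        {"start": 0.0, "end": 18.4, "text": "..."},
        {"start": 18.4, "end": 35.1, "text": "..."},
    ],
}

segs = transcript["segments"]
# Gap between consecutive segments; 0.0 means contiguous coverage.
gaps = [round(b["start"] - a["end"], 3) for a, b in zip(segs, segs[1:])]
print(gaps)  # [0.0]
```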

## How to Use

### Basic Loading

```python
from datasets import load_dataset

dataset = load_dataset("Suprio85/Bangla_speech_corpus-321")
print(dataset)
```

### Load with Audio Column

```python
from datasets import load_dataset, Audio

dataset = load_dataset("Suprio85/Bangla_speech_corpus-321", split="train")
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))

sample = dataset[0]
print("Transcript:", sample["transcript"])
print("Duration:  ", sample["duration"], "seconds")
print("Channel:   ", sample["channel"])
```

### Stream the Dataset (Recommended for 321 h)

```python
from datasets import load_dataset

# Streaming avoids downloading all 206 GB upfront
dataset = load_dataset(
    "Suprio85/Bangla_speech_corpus-321",
    split="train",
    streaming=True
)

for sample in dataset.take(5):
    print(sample["transcript"])
```

### Fine-tuning Whisper on This Dataset

```python
from transformers import WhisperProcessor, WhisperForConditionalGeneration
from datasets import load_dataset, Audio

processor = WhisperProcessor.from_pretrained("openai/whisper-large-v3")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v3")
model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(
    language="bengali", task="transcribe"
)

dataset = load_dataset("Suprio85/Bangla_speech_corpus-321", split="train")
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))

def prepare_batch(batch):
    audio = batch["audio"]
    batch["input_features"] = processor(
        audio["array"],
        sampling_rate=audio["sampling_rate"],
        return_tensors="pt"
    ).input_features[0]
    batch["labels"] = processor.tokenizer(batch["transcript"]).input_ids
    return batch

dataset = dataset.map(prepare_batch, remove_columns=dataset.column_names)
```
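The mapped dataset still needs a collator before it can feed a trainer, because the tokenized label sequences vary in length (Whisper's log-mel input features are already fixed-size). The sketch below pads labels in pure Python for clarity; a real training loop would return tensors, e.g. via the processor's padding utilities. `-100` is the index the cross-entropy loss ignores.

```python
def pad_labels(batch, pad_id=-100):
    """Right-pad variable-length label sequences so the batch is rectangular.
    -100 is the conventional ignore index for the cross-entropy loss."""
    max_len = max(len(item["labels"]) for item in batch)
    return [
        item["labels"] + [pad_id] * (max_len - len(item["labels"]))
        for item in batch
    ]

batch = [{"labels": [50258, 7, 8]}, {"labels": [50258, 9]}]
print(pad_labels(batch))  # [[50258, 7, 8], [50258, 9, -100]]
```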

## Comparison: 155h vs 321h

| Property | BanglaSpeechCorpus 155 | BanglaSpeechCorpus 321 |
|---|---|---|
| Recordings | 249 | 401 |
| Total Hours | ~155 hrs | 321.2 hrs |
| Total Segments | | 303,497 |
| Total Words | | 1,798,918 |
| Channels | 9 | 9 + new |
| Storage | ~101 GB | ~206 GB |

## Motivation

Long-form, continuous-transcript evaluation in Bangla ASR remains severely limited. Most public datasets consist of short, isolated, studio-recorded utterances that do not reflect real-world usage. BanglaSpeechCorpus-321 directly addresses this gap by providing:

- 321 hours of naturalistic, in-the-wild Bangla speech
- Continuous, full-episode transcripts from broadcast drama and entertainment
- 20-second maximum segments suitable for both training and evaluation pipelines
- 1.79M words of rich, colloquial, and formal Bangla vocabulary

## Intended Uses

- Training and fine-tuning Bangla ASR models (Whisper, wav2vec2, MMS)
- Long-form speech recognition benchmarking
- Bangla language model and tokenizer training
- Speech segmentation and forced alignment research
- Downstream Bangla NLP tasks on transcript text

## Limitations

- Transcripts are derived from YouTube auto-subtitles and may contain errors in emotionally expressive or fast speech.
- Background music and overlapping speech are present in some drama recordings.
- Speaker-level metadata (gender, age, region) is not annotated.

## Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{Bangla_Speech_Corpus-321_2025,
  author    = {Suprio85},
  title     = {BanglaVoice-LF 321: Large-Scale Long-Form Bangla Speech Corpus},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/Suprio85/Bangla_speech_corpus-321}
}
```

Also consider citing the original 155h corpus:

```bibtex
@dataset{Bangla_Speech_Corpus_2025,
  author    = {Suprio85},
  title     = {BanglaVoice-LF: A Long-Form Bangla Speech Corpus},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/Suprio85/Bangla_Speech_Corpus}
}
```

## License

Released under Creative Commons Attribution 4.0 International (CC BY 4.0). Free to use, share, and adapt with attribution.
