---
license: other
license_name: custom
license_link: LICENSE
pretty_name: MRTV4 Official Voices
language:
  - my
tags:
  - audio
  - speech-recognition
  - asr
  - myanmar
  - burmese
  - low-resource
  - fair-use
  - tiktok
  - webdataset
task_categories:
  - automatic-speech-recognition
  - audio-classification
---

# MRTV4 Official Voices

*The official voice of Myanmar's national broadcaster, ready for the age of AI.*

**MRTV4 Official Voices** is a focused collection of 2,368 high-quality audio segments (totaling 1h 51m 54s of speech) derived from the public broadcasts of MRTV4 Official, one of Myanmar's leading television channels.

The source channel regularly features:

- National news broadcasts and public announcements.
- Educational programming and cultural segments.
- Formal presentations and official addresses.

These videos capture the clear, articulate, and standardized Burmese used in professional broadcasting, making this dataset an invaluable resource for building AI models that require formal and high-fidelity speech.


## ❤️ Why I Built This

- Myanmar (Burmese) is often labeled a “low-resource language” in the AI world.
- I don’t reject that label because it’s false — I reject it because it reflects global neglect.
- I built this dataset to show what’s possible — to give Myanmar speech the visibility, respect, and technical foundation it deserves.

I care about languages. I care about people being heard. And if AI is going to learn from voices — I want it to hear mine, ours, Myanmar’s.

If you want your voice to be heard — you must first teach the machines to listen.


## 🕊️ Why It Matters to Me

We will come, and we will go. But if your voice is integrated into AI technology — it will go on. Forever.

I cannot build you a pyramid like the ancient Egyptians did. But I can build something more accessible, more global: A living archive — of your beautiful, strong, and clear voices.

Maybe, just maybe — AI will speak our beautiful Myanmar language through your voice. And I believe it will. I truly do. 🙂


## 🔍 What's Included

- 2,368 audio-text chunks
- ~1.87 hours of clear Burmese broadcast speech (1h 51m 54s)
- Auto-transcribed captions with timestamps
- Rich video metadata (title, views, likes, hashtags)
- Packaged as a WebDataset-ready `.tar.gz` shard for efficient streaming and training

## 📂 Dataset Structure

This dataset is packaged as a `.tar.gz` archive containing paired audio, transcript, and metadata files in the WebDataset format.

Each sample consists of three files, grouped by a unique key:

- `.mp3`: a short audio chunk extracted from a video.
- `.txt`: the aligned transcript for the audio chunk.
- `.json`: rich metadata, including the source video context.

All files in a sample share a unique hexadecimal key:

```
a3f1d9e671a44b88.mp3
a3f1d9e671a44b88.txt
a3f1d9e671a44b88.json
```
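To make the pairing concrete, here is a minimal, self-contained sketch that builds a tiny WebDataset-style tar in memory and then groups its members by their shared key, the way a loader would. The key and file contents below are placeholders, not real dataset values.

```python
import io
import tarfile
from collections import defaultdict

# One sample = three files sharing the same key (hypothetical key and bytes).
key = "a3f1d9e671a44b88"
payloads = {
    f"{key}.mp3": b"\xff\xfb\x90\x00",      # placeholder audio bytes
    f"{key}.txt": "မင်္ဂလာပါ".encode(),      # placeholder Burmese transcript
    f"{key}.json": b'{"duration": 2.5}',    # placeholder metadata
}

# Write the sample into an in-memory .tar.gz, mirroring the shard layout.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    for name, data in payloads.items():
        info = tarfile.TarInfo(name=name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

# Reading side: group member files by their shared key.
buf.seek(0)
samples = defaultdict(dict)
with tarfile.open(fileobj=buf, mode="r:gz") as tar:
    for member in tar.getmembers():
        stem, ext = member.name.rsplit(".", 1)
        samples[stem][ext] = tar.extractfile(member).read()

print(sorted(samples[key]))  # ['json', 'mp3', 'txt']
```

This grouping-by-stem convention is exactly what lets WebDataset loaders stream samples sequentially from a tar without random access.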

The `.json` file contains the following fields:

| Field | Description |
|---|---|
| `file_name` | Name of the chunked audio file |
| `original_file` | Source video’s `.mp3` filename |
| `transcript` | Burmese caption (also in the separate `.txt` file) |
| `duration` | Duration of the chunk, in seconds |
| `video_url` | Link to the original source video |
| `language` | Always `"my"` (Myanmar) |
| `title` | Title of the video |
| `description` | Full video description |
| `view_count` | View count at the time of download |
| `like_count` | Like count at the time of download |
| `channel` | Publisher name and description |
| `upload_date` | Upload date in `YYYYMMDD` format |
| `hashtags` | List of hashtags from the description |
| `thumbnail` | URL of the video thumbnail |
| `source` | URL of the source TikTok channel |
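As a quick illustration of working with these fields, the snippet below parses a hypothetical metadata record (the values are invented, not taken from the dataset) and converts `upload_date` into a proper date object:

```python
import json
from datetime import datetime

# A hypothetical metadata record using the fields described above.
record = json.loads("""{
  "file_name": "a3f1d9e671a44b88.mp3",
  "transcript": "...",
  "duration": 3.2,
  "language": "my",
  "upload_date": "20240115",
  "hashtags": ["news", "myanmar"]
}""")

# upload_date is a plain YYYYMMDD string, so parse it explicitly.
uploaded = datetime.strptime(record["upload_date"], "%Y%m%d").date()
print(uploaded.isoformat())  # 2024-01-15
print(record["duration"])    # 3.2
```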

## 🚀 How to Use

This dataset is designed for modern training pipelines and is compatible with 🤗 Hugging Face Datasets.

### ✅ Load using Hugging Face Datasets (streaming)

Streaming is a memory-efficient way to use the dataset without downloading it all at once.

```python
from datasets import load_dataset

# The library automatically finds the .tar.gz shard in the repo
ds = load_dataset(
    "freococo/mrtv4_official_voices",
    split="train",
    streaming=True,
)

# Iterate through the first 5 samples
for sample in ds.take(5):
    # The transcript is a top-level feature for easy access
    print(f"🎙️ Transcript: {sample['txt']}")

    # The audio data is in the 'mp3' column
    audio_data = sample["mp3"]
    audio_array = audio_data["array"]
    sampling_rate = audio_data["sampling_rate"]
    print(f"🎧 Audio loaded with shape {audio_array.shape} and rate {sampling_rate} Hz")

    # The 'json' column is already a Python dictionary
    metadata = sample["json"]
    print(f"📺 Channel: {metadata.get('channel')}")
    print(f"🎥 Video URL: {metadata.get('video_url')}")
    print("---")
```
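Since each sample's metadata carries a `duration` field, you can also tally total speech time while streaming. A minimal sketch with a synthetic stand-in for the dataset (the `format_hms` helper is mine, not part of the dataset):

```python
def format_hms(total_seconds: float) -> str:
    """Render a duration in seconds as Xh MMm SSs."""
    s = round(total_seconds)
    return f"{s // 3600}h {(s % 3600) // 60:02d}m {s % 60:02d}s"

# Synthetic samples standing in for the streamed dataset; in practice
# you would iterate over `ds` from the example above instead.
samples = [
    {"json": {"duration": 2.5}},
    {"json": {"duration": 4.0}},
    {"json": {"duration": 3.5}},
]
total = sum(s["json"]["duration"] for s in samples)
print(format_hms(total))  # 0h 00m 10s
```

Run over the full dataset, a sum like this should land near the advertised 1h 51m 54s.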

## 🙏 Special Thanks

This dataset would not exist without the incredible broadcasting and production efforts of:

- 🌟 MRTV4 and its entire organization.
- 🎤 The news anchors, presenters, and journalists who deliver information with clarity and professionalism.
- 🎥 The producers, editors, and technical staff who ensure the high quality of public broadcasts.

These individuals are the voice of a nation's formal discourse. Their work provides an authoritative and clean source of the Burmese language, essential for building robust AI.


Thank you for giving us your voices. Now, they may echo in the machines we build — not to replace you, but to remember you.

🫡🇲🇲🧠📣

## ⚠️ Limitations

While this dataset offers high-quality broadcast audio, it is not without limitations.

- **Auto-caption errors:** All transcripts were generated by an automated system (Whisper). While the audio is clear, some segments may contain transcription errors, especially for proper nouns and technical terms.
- **No human corrections:** No post-processing or human-in-the-loop editing was performed. This dataset reflects the raw, real-world output of an automated pipeline.
- **Audio quality:** While generally excellent, some clips may include broadcast artifacts such as jingles, station idents, or low-level background audio.
- **Representational focus:** The dataset features formal, broadcast-standard Burmese. It does not represent the full diversity of conversational speech or regional dialects in Myanmar.

## ✅ But Here's the Strength

This dataset's power lies in its quality and formality. Unlike noisy, conversational datasets, it provides a clean baseline of professional, articulate Burmese speech.

This makes it exceptionally well suited for:

- Fine-tuning ASR models for higher accuracy on formal language.
- Training text-to-speech (TTS) models that require a clear, authoritative voice.
- Providing a high-quality reference for linguistic analysis.

It is not a huge dataset — but it is a powerful one. It is not conversational — but it is clear. And a clear voice is the foundation of understanding.

## 📄 License

This dataset is released under a Fair Use / Research-Only License.

It is intended for:

- ✅ Non-commercial research
- ✅ Educational use
- ✅ Language preservation
- ✅ Open AI development for Burmese (Myanmar) speech

All content was sourced from MRTV4 Official's public channels. For any commercial inquiries, please contact the original content owner directly.

For full details, see the LICENSE file in this repository.

## 📚 Citation

```bibtex
@misc{freococo_2024_mrtv4,
  author       = {freococo},
  title        = {MRTV4 Official Voices: A Dataset of Burmese Broadcast Speech},
  year         = {2024},
  howpublished = {Hugging Face Datasets},
  url          = {https://huggingface.co/datasets/freococo/mrtv4_official_voices}
}
```