---
dataset_info:
  features:
    - name: audio
      dtype: audio
    - name: text
      dtype: string
    - name: speaker_id
      dtype: string
  splits:
    - name: train
      num_bytes: 1778511869.196
      num_examples: 1918
  download_size: 1720453358
  dataset_size: 1778511869.196
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# Shark Tank India STT Dataset

## Overview

This dataset contains 1,918 audio-text pairs extracted from Shark Tank India Season 1 episodes. Each sample pairs a WAV audio clip with its corresponding Hindi/Hinglish transcript, and the dataset is intended for benchmarking speech-to-text (STT) models on Hinglish transcription.


## Dataset Statistics

| Metric | Count |
| --- | --- |
| Total samples | 1,918 |
| Unique speakers | 17 (SPEAKER_00 through SPEAKER_16; labels are approximate, so take them with a grain of salt) |
| Total audio duration | ~1.5 hours (estimated) |
| Valid audio-text pairs | 1,918 |
| Skipped samples | 96 (empty transcripts) |

## Data Structure

The dataset is provided in Parquet format with the following columns:

| Column | Type | Description |
| --- | --- | --- |
| `audio` | Audio | Audio sample with metadata (WAV format) |
| `text` | string | Transcript text (Hindi/Hinglish) |
| `speaker_id` | string | Speaker identifier extracted from the directory name |

### Sample Format

```json
{
  "audio": {
    "bytes": null,
    "path": "/path/to/SPEAKER_00/clip.wav"
  },
  "text": "Yaar mujhe bhi office ke kaam se chhah mahine ke lie Delhi se baingalor jaana",
  "speaker_id": "SPEAKER_00"
}
```

## Data Characteristics & Limitations

**⚠️ Important notes about data quality:**

- **Language:** Content is primarily in Hindi and Hinglish (a Hindi-English mix)
- **Speaker variety:** 17 speaker labels across 17 episodes; the labels come from automatic diarization and are not fully accurate
- **Audio quality:** Source quality varies by episode; some segments may contain background noise or interruptions
- **Transcript accuracy:** Transcripts are automatically generated and may contain errors
- **Segmentation:** Clips are automatically segmented and aligned; boundaries may not always fall exactly on sentence boundaries
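Since 96 samples with empty transcripts were already skipped during dataset creation, any further cleaning you do can follow the same idea. A minimal sketch on toy samples (the `is_usable` helper is hypothetical, not part of the dataset's tooling):

```python
def is_usable(text):
    """Return True if a transcript is non-empty after stripping whitespace."""
    return bool(text and text.strip())

# Toy samples mimicking the dataset's columns (not real data)
samples = [
    {"text": "Yaar mujhe bhi Delhi jaana hai", "speaker_id": "SPEAKER_00"},
    {"text": "   ", "speaker_id": "SPEAKER_01"},  # whitespace-only: dropped
    {"text": "", "speaker_id": "SPEAKER_02"},     # empty: dropped
]

clean = [s for s in samples if is_usable(s["text"])]
print(len(clean))  # 1 usable sample remains
```

The same predicate can be passed to `datasets.Dataset.filter` to clean the full dataset in place.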

## Loading the Dataset

### Using Hugging Face Datasets

```python
from datasets import load_dataset

# Load from a local Parquet file
dataset = load_dataset("parquet", data_files="train.parquet", split="train")

# Or load from the Hugging Face Hub (if uploaded)
dataset = load_dataset("username/shark-tank-india-s1-tts", split="train")

# Iterate through samples
for sample in dataset:
    audio = sample["audio"]
    text = sample["text"]
    speaker_id = sample["speaker_id"]

    # Decoded audio data can be accessed as:
    audio_array = audio["array"]
    sampling_rate = audio["sampling_rate"]

    # Use for STT benchmarking...
```
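The "~1.5 hours (estimated)" figure in the statistics table can be recomputed from the decoded audio, since each sample's duration is just array length divided by sampling rate. A minimal sketch on a synthetic stand-in sample (real samples come from iterating the dataset as above):

```python
def clip_seconds(audio):
    """Duration of one decoded audio sample in seconds."""
    return len(audio["array"]) / audio["sampling_rate"]

# Synthetic stand-in for a decoded sample: 1 second of silence at 16 kHz
fake_audio = {"array": [0.0] * 16000, "sampling_rate": 16000}
print(clip_seconds(fake_audio))  # 1.0

# Over the full dataset:
# total_hours = sum(clip_seconds(s["audio"]) for s in dataset) / 3600
```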

### Using Pandas

```python
import pandas as pd

# Load Parquet file
df = pd.read_parquet("train.parquet")

# Explore the dataset
print(df.head())
print(f"Dataset shape: {df.shape}")
print(f"Unique speakers: {df['speaker_id'].nunique()}")
```
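Beyond counting unique speakers, it can be useful to see how samples are distributed across the (approximate) speaker labels. A minimal sketch on a toy frame with the dataset's `speaker_id` column (the counts are illustrative, not real):

```python
import pandas as pd

# Toy frame mirroring the dataset's columns (not real counts)
df = pd.DataFrame({
    "speaker_id": ["SPEAKER_00", "SPEAKER_00", "SPEAKER_01"],
    "text": ["sample a", "sample b", "sample c"],
})

# Samples per speaker, most frequent first
counts = df["speaker_id"].value_counts()
print(counts.to_dict())  # {'SPEAKER_00': 2, 'SPEAKER_01': 1}
```

On the real `train.parquet`, the same `value_counts()` call reveals whether some speakers dominate, which matters when evaluating per-speaker STT accuracy.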

## Versions

### Version 1.0

- **Content:** Shark Tank India Season 1 only
- **Speakers:** 17 unique speakers (SPEAKER_00 through SPEAKER_16)
- **Format:** Parquet
- **Total samples:** 1,918

### Future Versions

- Additional seasons will be added incrementally
- Transcript cleaning and refinement
- Speaker embedding metadata (optional)