---
dataset_info:
  features:
    - name: file_name
      dtype: string
    - name: uni
      dtype: string
    - name: wylie
      dtype: string
    - name: url
      dtype: string
    - name: dept
      dtype: string
    - name: grade
      dtype: int64
    - name: char_len
      dtype: int64
    - name: audio_len
      dtype: float64
    - name: original_id
      dtype: string
    - name: strata
      dtype: string
    - name: __index_level_0__
      dtype: int64
  splits:
    - name: train
      num_bytes: 4017714
      num_examples: 6988
    - name: validation
      num_bytes: 212173
      num_examples: 368
  download_size: 1451597
  dataset_size: 4229887
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
---

## Dataset Structure

- **Features:**
  - `file_name`: Name of the file.
  - `uni`: Tibetan text in Unicode.
  - `wylie`: Wylie transliteration of the Tibetan text.
  - `url`: Source URL.
  - `dept`: Department or category (e.g. Teaching, Practice, Q&A, Prayer).
  - `grade`: Grade/class level.
  - `char_len`: Number of characters in the text.
  - `audio_len`: Audio duration in seconds.
  - `original_id`: Original document or audio ID.
  - `strata`: Stratum label used to group the data.
  - `__index_level_0__`: Leftover integer index from the original dataframe.
- **Splits:** `train`, `validation`

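A single record can be pictured as a plain dict with these fields. The values below are illustrative placeholders, not rows from the dataset; the parsing of `strata` into grade band, length bucket, and department is an inference from the stratum labels listed further down:

```python
# Illustrative record shape; every value here is made up for demonstration.
record = {
    "file_name": "example_0001.wav",            # hypothetical file name
    "uni": "བཀྲ་ཤིས་བདེ་ལེགས།",                      # Tibetan Unicode text
    "wylie": "bkra shis bde legs/",             # Wylie transliteration
    "url": "https://example.com/source",        # placeholder URL
    "dept": "Teaching",
    "grade": 75,
    "char_len": 17,                             # code points in `uni`
    "audio_len": 3.2,                           # seconds
    "original_id": "doc-0001",
    "strata": "70-80__medium__Teaching",
    "__index_level_0__": 0,
}

# The stratum label appears to encode grade band, length bucket, and dept:
grade_band, length_bucket, dept = record["strata"].split("__")
```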

## 📊 Split-wise Statistics

| Split      | # Samples | Total Char Length | Total Audio Length (hr) |
|------------|-----------|-------------------|-------------------------|
| train      | 6988      | 613604            | 10.5                    |
| validation | 368       | 32462             | 0.56                    |
| **Total**  | **7356**  | **646066**        | **11.06**               |
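The totals in the table are internally consistent; a quick arithmetic check, using only the numbers from the table above:

```python
# Per-split figures copied from the statistics table.
splits = {
    "train":      {"samples": 6988, "chars": 613604, "hours": 10.5},
    "validation": {"samples": 368,  "chars": 32462,  "hours": 0.56},
}

total_samples = sum(s["samples"] for s in splits.values())        # 7356
total_chars = sum(s["chars"] for s in splits.values())            # 646066
total_hours = round(sum(s["hours"] for s in splits.values()), 2)  # 11.06

# Average clip length implied by the totals, roughly 5.4 seconds.
avg_sec_per_sample = total_hours * 3600 / total_samples
```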

`strata` value counts:

| strata                    | train_count | validation_count |
|---------------------------|-------------|------------------|
| 70-80__long__Teaching     | 3064        | 161              |
| 70-80__medium__Teaching   | 1347        | 71               |
| 80-90__long__Practice     | 1011        | 53               |
| 70-80__long__Q&A          | 635         | 34               |
| 70-80__long__Prayer       | 593         | 31               |
| 70-80__medium__Prayer     | 172         | 9                |
| 70-80__short__Teaching    | 105         | 6                |
| 70-80__medium__Q&A        | 61          | 3                |
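The per-stratum counts sum exactly to the split sizes, and each stratum contributes roughly 5% of its rows to validation, which suggests the validation split was drawn stratum by stratum. A check against the counts above:

```python
# (train_count, validation_count) per stratum, copied from the table.
strata_counts = {
    "70-80__long__Teaching":   (3064, 161),
    "70-80__medium__Teaching": (1347, 71),
    "80-90__long__Practice":   (1011, 53),
    "70-80__long__Q&A":        (635, 34),
    "70-80__long__Prayer":     (593, 31),
    "70-80__medium__Prayer":   (172, 9),
    "70-80__short__Teaching":  (105, 6),
    "70-80__medium__Q&A":      (61, 3),
}

train_total = sum(tr for tr, va in strata_counts.values())  # 6988
val_total = sum(va for tr, va in strata_counts.values())    # 368

# Fraction of each stratum held out for validation (all close to 0.05).
val_fracs = {s: va / (tr + va) for s, (tr, va) in strata_counts.items()}
```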

## 🚀 Usage

```python
from datasets import load_dataset

ds = load_dataset("your_namespace/your_dataset", split="train")
```