---
configs:
  - config_name: default
    data_files:
      - split: test.other
        path: data/test.other-*
      - split: validation.other
        path: data/validation.other-*
      - split: train.other.500
        path: data/train.other.500-*
dataset_info:
  features:
    - name: text
      dtype: string
    - name: id
      dtype: string
    - name: audio_codes
      sequence:
        sequence: int64
  splits:
    - name: test.other
      num_bytes: 62049899
      num_examples: 2939
    - name: validation.other
      num_bytes: 59498714
      num_examples: 2864
    - name: train.other.500
      num_bytes: 5761561617
      num_examples: 148688
  download_size: 929586038
  dataset_size: 5883110230
---

# Dataset Card for "speech_tokenizer_16k"

[More Information Needed]
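The `dataset_info` block above declares three fields per example: a `text` string, an `id` string, and `audio_codes` as a sequence of `int64` sequences (nested rows of discrete audio tokens). The sketch below is a minimal, self-contained illustration of that record shape; the example values are hypothetical and not taken from the dataset.

```python
# Hypothetical record matching the feature schema declared in dataset_info:
#   text: string, id: string, audio_codes: sequence of sequences of int64.
example = {
    "text": "HELLO WORLD",                          # transcript (made-up value)
    "id": "1234-5678-0000",                         # utterance id (made-up value)
    "audio_codes": [[17, 93, 402], [5, 88, 311]],   # nested int token rows
}

def matches_schema(record):
    """Return True if `record` conforms to the declared feature types."""
    return (
        isinstance(record.get("text"), str)
        and isinstance(record.get("id"), str)
        and isinstance(record.get("audio_codes"), list)
        and all(
            isinstance(row, list) and all(isinstance(tok, int) for tok in row)
            for row in record["audio_codes"]
        )
    )

print(matches_schema(example))  # → True
```

With the Hugging Face `datasets` library installed, the splits listed in the front matter would typically be loaded with `load_dataset`, passing one of the declared split names such as `split="test.other"`.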