---
dataset_info:
  features:
  - name: file_id
    dtype: string
  - name: instruction
    dtype: string
  - name: transcription
    dtype: string
  - name: src_speech_tokenizer_0
    sequence: int64
  - name: src_speech_tokenizer_1
    sequence: int64
  - name: src_speech_tokenizer_2
    sequence: int64
  - name: src_speech_tokenizer_3
    sequence: int64
  - name: src_speech_tokenizer_4
    sequence: int64
  - name: src_speech_tokenizer_5
    sequence: int64
  - name: src_speech_tokenizer_6
    sequence: int64
  - name: src_speech_tokenizer_7
    sequence: int64
  - name: tgt_speech_tokenizer_0
    sequence: int64
  - name: tgt_speech_tokenizer_1
    sequence: int64
  - name: tgt_speech_tokenizer_2
    sequence: int64
  - name: tgt_speech_tokenizer_3
    sequence: int64
  - name: tgt_speech_tokenizer_4
    sequence: int64
  - name: tgt_speech_tokenizer_5
    sequence: int64
  - name: tgt_speech_tokenizer_6
    sequence: int64
  - name: tgt_speech_tokenizer_7
    sequence: int64
  splits:
  - name: train
    num_bytes: 4052674372
    num_examples: 171430
  - name: validation
    num_bytes: 235209418
    num_examples: 10000
  - name: test
    num_bytes: 236201364
    num_examples: 10000
  download_size: 321297813
  dataset_size: 4524085154
---
# Dataset Card for "amazon_tts_speech_tokenizer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
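
The metadata above describes rows that pair an `instruction` and `transcription` with sixteen `int64` sequence columns: eight source and eight target speech-tokenizer streams, consistent with an 8-layer residual codec. Below is a minimal sketch of loading and inspecting the dataset with 🤗 `datasets`. The repository id `kuanhuggingface/amazon_tts_speech_tokenizer` and the assumption that all eight codebook layers share one sequence length are inferred from this card, not confirmed by it.

```python
from datasets import load_dataset
import numpy as np

# Load all splits from the Hub (repo id assumed; adjust if the dataset
# lives under a different namespace).
ds = load_dataset("kuanhuggingface/amazon_tts_speech_tokenizer")

# Each row carries an instruction, a transcription, and eight source /
# eight target speech-tokenizer code sequences.
example = ds["train"][0]
print(example["file_id"])
print(example["instruction"])

# Stack the eight source codebook layers into a single (8, T) array,
# assuming every layer has the same length T (typical for residual
# vector quantization, where each layer refines the same frames).
src_codes = np.stack([example[f"src_speech_tokenizer_{i}"] for i in range(8)])
tgt_codes = np.stack([example[f"tgt_speech_tokenizer_{i}"] for i in range(8)])
print(src_codes.shape, tgt_codes.shape)
```

If only the first few codebook layers are needed (a common trade-off, since lower layers carry most of the content), slicing `src_codes[:k]` keeps the top `k` layers without reloading the data.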