---
dataset_info:
  features:
  - name: input_features
    sequence:
      sequence: float32
  - name: labels
    sequence: int64
  splits:
  - name: test
    num_bytes: 10227942104
    num_examples: 6656
  download_size: 2056491222
  dataset_size: 10227942104
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
---
The `input_features` field holds the log-Mel spectrogram values produced by passing each example's audio array through a Whisper processor's feature extractor, and the `labels` field holds the ground-truth transcript tokenized with the Whisper tokenizer.
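A minimal sketch of how one example could be produced (assuming 16 kHz audio and the standard Whisper feature extractor with the 128 mel bins used by whisper-large-v3(-turbo); the placeholder audio and the extractor settings are assumptions, not the exact preprocessing script):

```python
import numpy as np
from transformers import WhisperFeatureExtractor

# Whisper pads/truncates audio to 30 s and computes a log-Mel spectrogram.
# whisper-large-v3 and -turbo use 128 mel bins (feature_size=128).
feature_extractor = WhisperFeatureExtractor(feature_size=128)

# Placeholder: 1 s of silence standing in for a real audio array.
audio = np.zeros(16000, dtype=np.float32)

input_features = feature_extractor(
    audio, sampling_rate=16000, return_tensors="np"
).input_features[0]

print(input_features.shape)  # (128, 3000): 128 mel bins x 3000 frames (30 s)
```

The `labels` column is built analogously on the text side, e.g. `processor.tokenizer(transcript).input_ids` with the matching Whisper tokenizer.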
The following notebook shows what I did with the Svarah dataset and how I fine-tuned whisper-large-v3-turbo on it. The training steps for whisper-large-v3 are the same.
https://colab.research.google.com/drive/1oD0v7MWZ9WJqk7tZYThwgTUM85PTEhMN?usp=sharing