Update README.md
base_model:
- openai/whisper-small
pipeline_tag: automatic-speech-recognition
library_name: transformers
tags:
- audio
- automatic-speech-recognition
---
# Model Card for zu_whisper (Whisper-small fine-tuned for isiZulu ASR)

<!-- Provide a quick summary of what the model is/does. -->

zu_whisper is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) for isiZulu automatic speech recognition (ASR), trained on the NCHLT isiZulu Speech Corpus.
## Model Details

### Model Description
This model is a fine-tuned version of OpenAI's Whisper-small, optimized for isiZulu Automatic Speech Recognition (ASR). It was trained on the NCHLT isiZulu Speech Corpus to improve its performance on isiZulu speech transcription tasks (corpus details are listed under Datasets below).
### Base Model

- Name: [openai/whisper-small](https://huggingface.co/openai/whisper-small)
- Type: Automatic Speech Recognition (ASR)
- Original language: Multilingual
### Performance

- Word Error Rate (WER): 31.87%
- Character Error Rate (CER): 9.43%
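The evaluation split is not specified on this card. As a rough illustration of how error rates of this kind are computed, below is a minimal sketch using the third-party `jiwer` package; the reference and hypothesis strings are hypothetical placeholders, not taken from the actual evaluation data.

```python
# Sketch: computing WER and CER with jiwer (pip install jiwer).
# The strings below are hypothetical placeholders, not real evaluation data.
import jiwer

references = ["sawubona unjani namhlanje"]   # ground-truth transcripts
hypotheses = ["sawubona unjani namuhla"]     # model transcriptions

wer = jiwer.wer(references, hypotheses)  # word error rate
cer = jiwer.cer(references, hypotheses)  # character error rate
print(f"WER: {wer:.2%}  CER: {cer:.2%}")
```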
### Usage

To use this model for inference:

```python
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Load model and processor
model = WhisperForConditionalGeneration.from_pretrained("nmoyo45/zu_whisper")
processor = WhisperProcessor.from_pretrained("nmoyo45/zu_whisper")

# Prepare your audio (a 1-D float array sampled at 16 kHz)
audio_input = ...  # Load your audio file here (see the loading sketch below)

# Process the audio into log-Mel input features
input_features = processor(audio_input, sampling_rate=16000, return_tensors="pt").input_features

# Generate token ids
predicted_ids = model.generate(input_features)

# Decode the token ids to text (batch_decode returns a list, one string per input)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)

print(transcription[0])
```
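One way to obtain `audio_input` is to load a local file with `librosa`, which can resample to 16 kHz on load. This is a minimal sketch, assuming `librosa` is installed and using a hypothetical file name `speech.wav`:

```python
# Sketch: loading a local audio file at 16 kHz with librosa (pip install librosa).
# "speech.wav" is a hypothetical placeholder path, not a file shipped with this repo.
import librosa

audio_input, sampling_rate = librosa.load("speech.wav", sr=16000)  # 1-D float32 array at 16 kHz
```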
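Alternatively, the `transformers` pipeline API wraps feature extraction, generation, and decoding in a single call. A minimal sketch, again using a hypothetical `speech.wav` and assuming ffmpeg is available for audio decoding:

```python
# Sketch: one-call transcription with the high-level ASR pipeline.
# "speech.wav" is a hypothetical placeholder path.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="nmoyo45/zu_whisper")
result = asr("speech.wav")
print(result["text"])
```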
### Datasets

#### NCHLT isiZulu Speech Corpus

- Size: Approximately 56 hours of transcribed speech
- Speakers: 210 (98 female, 112 male)
- Content: Prompted speech (3-5 word utterances read from a smartphone screen)
- Source: Audio collected with smartphones in non-studio environments
- License: Creative Commons Attribution 3.0 Unported License (CC BY 3.0)
- Citation: N.J. de Vries, M.H. Davel, J. Badenhorst, W.D. Basson, F. de Wet, E. Barnard and A. de Waal, "A smartphone-based ASR data collection tool for under-resourced languages", Speech Communication, Volume 56, January 2014, pp. 119–131.

#### Lwazi isiZulu ASR Corpus

- Speakers: 199
- Content: ~14 elicited utterances and ~16 phonetically balanced read sentences per speaker
- License: Creative Commons Attribution 2.5 South Africa License (CC BY 2.5 ZA): http://creativecommons.org/licenses/by/2.5/za/legalcode
- Citation: E. Barnard, M. Davel and C. van Heerden, "ASR Corpus Design for Resource-Scarce Languages", in Proceedings of the 10th Annual Conference of the International Speech Communication Association (Interspeech), Brighton, United Kingdom, September 2009, pp. 2847-2850.