Dataset: pollitoconpapass/test-genesis-quzbible-v4
How to use pollitoconpapass/whisper-small-finetuned with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="pollitoconpapass/whisper-small-finetuned")
```

```python
# Load the processor and model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("pollitoconpapass/whisper-small-finetuned")
model = AutoModelForSpeechSeq2Seq.from_pretrained("pollitoconpapass/whisper-small-finetuned")
```

This model is a fine-tuned version of openai/whisper-small on the Genesis Cuzco Quechua Bible dataset. It achieves the following results on the evaluation set:

- Loss: 0.0000
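The pipeline above accepts a path to an audio file, or a dict with an `"array"` of 16 kHz mono float samples and its `"sampling_rate"`. A minimal sketch of preparing raw PCM audio for that input format, assuming NumPy is available; the naive linear-interpolation resampler is for illustration only (a real pipeline would use torchaudio or librosa), and the actual transcription call is commented out because it downloads the model weights:

```python
# Convert int16 PCM audio into the {"array", "sampling_rate"} dict that the
# automatic-speech-recognition pipeline accepts. Whisper models expect
# 16 kHz mono float32 input.
import numpy as np

TARGET_SR = 16_000  # sampling rate expected by Whisper feature extraction

def to_whisper_input(pcm: np.ndarray, sr: int) -> dict:
    """Convert int16 PCM (mono or stereo) at rate `sr` into pipeline input."""
    audio = pcm.astype(np.float32) / 32768.0      # int16 -> [-1, 1) float
    if audio.ndim == 2:                           # stereo -> mono
        audio = audio.mean(axis=1)
    if sr != TARGET_SR:                           # naive linear resample
        n_out = int(round(len(audio) * TARGET_SR / sr))
        x_old = np.linspace(0.0, 1.0, num=len(audio), endpoint=False)
        x_new = np.linspace(0.0, 1.0, num=n_out, endpoint=False)
        audio = np.interp(x_new, x_old, audio).astype(np.float32)
    return {"array": audio, "sampling_rate": TARGET_SR}

# One second of 44.1 kHz stereo silence stands in for real speech here.
sample = to_whisper_input(np.zeros((44_100, 2), dtype=np.int16), 44_100)

# With the model downloaded, transcription is a single call:
# from transformers import pipeline
# pipe = pipeline("automatic-speech-recognition",
#                 model="pollitoconpapass/whisper-small-finetuned")
# print(pipe(sample)["text"])
```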
Model description: More information needed

Intended uses & limitations: More information needed

Training and evaluation data: More information needed
Training results:
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 0.0001 | 363.6364 | 500 | 0.0001 |
| 0.0 | 727.2727 | 1000 | 0.0000 |
| 0.0 | 1090.9091 | 1500 | 0.0000 |
| 0.0 | 1454.5455 | 2000 | 0.0000 |
| 0.0 | 1818.1818 | 2500 | 0.0000 |
| 0.0 | 2181.8182 | 3000 | 0.0000 |
| 0.0 | 2545.4545 | 3500 | 0.0000 |
| 0.0 | 2909.0909 | 4000 | 0.0000 |
| 0.0 | 3272.7273 | 4500 | 0.0000 |
| 0.0 | 3636.3636 | 5000 | 0.0000 |
Base model: openai/whisper-small
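For the direct-load path shown earlier (processor plus model), it can help to know that the Whisper processor pads or truncates every input to a fixed 30-second window (480,000 samples at 16 kHz) before computing log-Mel features. A minimal sketch, assuming NumPy; the `pad_or_trim` helper only illustrates that padding step and is not part of the Transformers API, and the generation calls are commented out because they download model weights:

```python
# Reproduce the fixed 30 s windowing Whisper's feature extraction applies.
import numpy as np

SAMPLE_RATE = 16_000
N_SAMPLES = 30 * SAMPLE_RATE  # 480,000 samples per 30 s window

def pad_or_trim(audio: np.ndarray) -> np.ndarray:
    """Right-pad with silence, or truncate, to exactly one 30 s window."""
    if len(audio) >= N_SAMPLES:
        return audio[:N_SAMPLES]
    return np.pad(audio, (0, N_SAMPLES - len(audio)))

# Five seconds of silence gets padded up to the full window.
window = pad_or_trim(np.zeros(5 * SAMPLE_RATE, dtype=np.float32))

# With processor and model loaded as above, transcription looks like:
# inputs = processor(window, sampling_rate=SAMPLE_RATE, return_tensors="pt")
# predicted_ids = model.generate(inputs.input_features)
# text = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
```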