My Whisper Fine-Tunes (V2)

Whisper fine-tunes for my voice and vocabulary (tech, Hebrew). About 1 hour of training data, so these are still very much proofs of concept.
How to use danielrosehill/daniel_whisper_finetune_medium_v2 with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="danielrosehill/daniel_whisper_finetune_medium_v2")

# Load model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("danielrosehill/daniel_whisper_finetune_medium_v2")
model = AutoModelForSpeechSeq2Seq.from_pretrained("danielrosehill/daniel_whisper_finetune_medium_v2")
```

This model is a fine-tuned version of openai/whisper-medium on an unknown dataset. It achieves the following results on the evaluation set:
More information needed
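The pipeline above expects mono audio sampled at 16 kHz, the rate Whisper models are trained on. A minimal sketch (using only numpy, with a placeholder input array rather than a real recording) of resampling arbitrary-rate audio before handing it to the pipeline:

```python
import numpy as np

def resample_to_16k(audio: np.ndarray, orig_sr: int) -> np.ndarray:
    """Linearly resample mono float audio to the 16 kHz rate Whisper expects."""
    target_sr = 16_000
    if orig_sr == target_sr:
        return audio.astype(np.float32)
    n_out = int(round(len(audio) * target_sr / orig_sr))
    # Interpolate the signal onto the new, coarser/finer sample grid
    old_t = np.linspace(0.0, 1.0, num=len(audio), endpoint=False)
    new_t = np.linspace(0.0, 1.0, num=n_out, endpoint=False)
    return np.interp(new_t, old_t, audio).astype(np.float32)

# Example: one second of 44.1 kHz audio becomes 16,000 samples
audio_44k = np.zeros(44_100, dtype=np.float32)
audio_16k = resample_to_16k(audio_44k, 44_100)
# The resampled array can then be passed to the pipeline as
# pipe({"array": audio_16k, "sampling_rate": 16_000})
```

In practice a library resampler (e.g. what `pipeline` itself applies when given a file path) is preferable; this only shows the shape of the data the model consumes.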
Training results:
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 1.1622 | 1.3158 | 50 | 0.4810 |
| 0.06 | 2.6316 | 100 | 0.1686 |
| 0.0207 | 3.9474 | 150 | 0.1639 |
| 0.0078 | 5.2632 | 200 | 0.1709 |
| 0.0036 | 6.5789 | 250 | 0.1799 |
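Reading the table: training loss keeps falling toward zero while validation loss bottoms out at step 150 and then creeps back up, the usual overfitting signature on a small (~1 hour) dataset. A short sketch that picks the best checkpoint from the table data:

```python
# (step, training_loss, validation_loss) rows copied from the table above
rows = [
    (50, 1.1622, 0.4810),
    (100, 0.06, 0.1686),
    (150, 0.0207, 0.1639),
    (200, 0.0078, 0.1709),
    (250, 0.0036, 0.1799),
]

# The checkpoint worth keeping is the one with the lowest validation loss
best_step, _, best_val = min(rows, key=lambda r: r[2])
print(best_step, best_val)  # 150 0.1639
```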
Base model: openai/whisper-medium