How to use Shamik/whisper-tiny-polyAI-minds14 with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="Shamik/whisper-tiny-polyAI-minds14")
```

```python
# Load the processor and model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("Shamik/whisper-tiny-polyAI-minds14")
model = AutoModelForSpeechSeq2Seq.from_pretrained("Shamik/whisper-tiny-polyAI-minds14")
```

This model is a fine-tuned version of openai/whisper-tiny on the PolyAI/minds14 dataset (speech transcription in English from the e-banking domain). It achieves the following results on the evaluation set:
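Whisper checkpoints expect 16 kHz mono input, while MINDS-14 recordings are sampled at a lower rate (8 kHz for the en-US subset), so raw arrays may need resampling before they reach the model. The pipeline can often handle this itself when given a file path; the sketch below only illustrates the underlying idea with a pure-Python linear-interpolation resampler (the function name is ours, not part of any library):

```python
def resample_linear(samples, src_rate, dst_rate):
    """Resample a mono signal by linear interpolation (illustrative only;
    real pipelines typically use librosa or torchaudio resamplers)."""
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate          # fractional index into the source
        j = int(pos)
        frac = pos - j
        a = samples[min(j, len(samples) - 1)]  # clamp at the signal edge
        b = samples[min(j + 1, len(samples) - 1)]
        out.append(a + (b - a) * frac)         # linear blend of neighbors
    return out

# e.g. upsample an 8 kHz signal to the 16 kHz Whisper expects
audio_16k = resample_linear([0.0, 1.0, 2.0, 3.0], 8000, 16000)
```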
More information needed
The following hyperparameters were used during training:

Training results:
| Training Loss | Epoch | Step | Validation Loss | WER (ortho) | WER |
|---|---|---|---|---|---|
| 0.3501 | 3.57 | 100 | 0.7134 | 0.4568 | 0.4212 |
| 0.044 | 7.14 | 200 | 0.7639 | 0.4096 | 0.3746 |
| 0.0048 | 10.71 | 300 | 0.8265 | 0.4109 | 0.3854 |
| 0.0021 | 14.29 | 400 | 0.8668 | 0.4009 | 0.3823 |
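The WER columns above report word error rate: word-level insertions, deletions, and substitutions divided by the number of reference words, with the orthographic variant computed on unnormalized text. A minimal sketch of the metric (not the evaluation code behind this card, which typically uses a library such as `jiwer`):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                      # delete all remaining ref words
    for j in range(len(hyp) + 1):
        d[0][j] = j                      # insert all remaining hyp words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)

# e.g. one substitution across three reference words -> WER of 1/3
score = wer("pay my bill", "pay the bill")
```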
Base model: openai/whisper-tiny