Dataset: nguyendv02/ViMD_Dataset
How to use mo-nguyen-tmo/phowhisper_vimd with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="mo-nguyen-tmo/phowhisper_vimd")
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("mo-nguyen-tmo/phowhisper_vimd")
model = AutoModelForSpeechSeq2Seq.from_pretrained("mo-nguyen-tmo/phowhisper_vimd")
```

This model is a fine-tuned version of vinai/PhoWhisper-base on the ViMD dataset. It achieves the following results on the evaluation set:
Model description: More information needed

Intended uses & limitations: More information needed

Training and evaluation data: More information needed
Training results:
| Training Loss | Epoch | Step | Validation Loss | WER (%) |
|---|---|---|---|---|
| 0.2943 | 1.0650 | 1000 | 0.3295 | 20.7727 |
| 0.2157 | 2.1299 | 2000 | 0.3062 | 19.3097 |
| 0.1877 | 3.1949 | 3000 | 0.3052 | 19.4259 |
| 0.1673 | 4.2599 | 4000 | 0.3062 | 19.3867 |
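The WER column reports the word error rate in percent. As an illustration of the metric (not the evaluation script used for this card, which is unspecified), a minimal word-level WER via edit distance:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate in percent: 100 * (subs + dels + ins) / reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

print(wer("xin chao viet nam", "xin chao viet"))  # one deletion in 4 words → 25.0
```

A WER of 19.39 therefore means roughly one word error per five reference words on the validation set.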
Base model: vinai/PhoWhisper-base