How to use GleamEyeBeast/Mandarin_naive with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="GleamEyeBeast/Mandarin_naive")
```

```python
# Load the processor and model directly
from transformers import AutoProcessor, AutoModelForCTC

processor = AutoProcessor.from_pretrained("GleamEyeBeast/Mandarin_naive")
model = AutoModelForCTC.from_pretrained("GleamEyeBeast/Mandarin_naive")
```
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice dataset. It achieves the following results on the evaluation set:
Model description, intended uses & limitations, and training/evaluation data: more information needed.
The following hyperparameters were used during training:

Training results:
| Training Loss | Epoch | Step | Validation Loss | Wer |
|---|---|---|---|---|
| 4.8963 | 3.67 | 400 | 1.0645 | 0.8783 |
| 0.5506 | 7.34 | 800 | 0.5032 | 0.5389 |
| 0.2111 | 11.01 | 1200 | 0.4765 | 0.4712 |
| 0.1336 | 14.68 | 1600 | 0.4815 | 0.4511 |
| 0.0974 | 18.35 | 2000 | 0.4956 | 0.4370 |
| 0.0748 | 22.02 | 2400 | 0.4881 | 0.4235 |
| 0.0584 | 25.69 | 2800 | 0.4732 | 0.4193 |
| 0.0458 | 29.36 | 3200 | 0.4584 | 0.3999 |
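The Wer column above is the word error rate: the edit distance (substitutions, insertions, deletions) between the reference and the hypothesis, divided by the reference length. As a sketch of how that number is computed (the card does not say which tool produced it; for Mandarin the tokens are often characters rather than space-separated words):

```python
# Minimal word error rate (WER) computation via edit distance.
# Tokens here are whitespace-separated; real Mandarin evaluation
# typically scores per character instead.

def wer(reference, hypothesis):
    r, h = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between r[:i] and h[:j]
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i  # delete all i reference tokens
    for j in range(len(h) + 1):
        d[0][j] = j  # insert all j hypothesis tokens
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(r)][len(h)] / len(r)

print(wer("the cat sat", "the cat sat"))  # -> 0.0
print(wer("the cat sat", "the bat"))      # 1 substitution + 1 deletion -> 2/3
```

A final Wer of 0.3999 therefore means roughly 40% of reference tokens needed an edit to match the model's output.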