CTC-DRO: Robust Optimization for Reducing Language Disparities in Speech Recognition
Paper: arXiv:2502.01777
This repository contains an automatic speech recognition (ASR) model fine-tuned from openai/whisper-large-v3 using the principles of CTC-DRO, applied to Whisper's sequence-to-sequence architecture.
The model was trained on balanced training data from set 2 (eng, fas, hrv, ita, slk, yue).
DRO hyperparameters: eta=5e-3, alpha=0.1, aggregation: mean
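To illustrate how distributionally robust optimization reweights language groups during training, here is a minimal sketch of a generic group-DRO-style exponentiated-gradient weight update. This is an assumption-laden illustration, not the exact CTC-DRO update rule from the paper: the function name `update_group_weights` is hypothetical, and `alpha` is used here as a smoothing floor toward the uniform distribution, which matches the listed hyperparameters only by analogy.

```python
import numpy as np

# Hedged sketch: a generic group-DRO-style weight update (exponentiated
# gradients over per-group losses). NOT the exact CTC-DRO rule; eta and
# alpha mirror the hyperparameters above only by analogy (assumption).
def update_group_weights(weights, group_losses, eta=5e-3, alpha=0.1):
    w = weights * np.exp(eta * group_losses)   # upweight high-loss groups
    w = w / w.sum()                            # renormalize onto the simplex
    uniform = np.full_like(w, 1.0 / len(w))
    return (1 - alpha) * w + alpha * uniform   # smooth toward uniform

# Six language groups, as in training set 2 (eng, fas, hrv, ita, slk, yue)
w = np.full(6, 1 / 6)
losses = np.array([0.9, 1.4, 1.1, 1.0, 1.2, 1.6])
w = update_group_weights(w, losses)
```

The intuition: languages with higher current loss receive larger sampling/loss weight on the next step, which is what reduces cross-language disparities.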
This model is intended for multilingual ASR. Users can run inference with the Hugging Face Transformers library:
```python
import torch
import librosa
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Load the fine-tuned model and its processor
model = WhisperForConditionalGeneration.from_pretrained("bartelds/whisper-dro-set2-dro")
processor = WhisperProcessor.from_pretrained("bartelds/whisper-dro-set2-dro")
model.eval()

# Whisper expects 16 kHz mono audio
audio, sr = librosa.load("input.wav", sr=16000)
inputs = processor.feature_extractor(audio, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    generated = model.generate(input_features=inputs.input_features)

text = processor.tokenizer.batch_decode(generated, skip_special_tokens=True)[0]
print("Recognized text:", text)
```
Install the dependencies with `pip install transformers torch librosa`, then load the model via `from_pretrained()` as shown above. The base model is openai/whisper-large-v3.
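Whisper's feature extractor operates on 30-second windows, so recordings longer than that are typically split into chunks before transcription. Below is a minimal sketch using synthetic audio in place of a real file; the naive non-overlapping chunking is an assumption for illustration, not part of this model card's pipeline.

```python
import numpy as np

# Split audio into 30-second windows at 16 kHz. Synthetic silence stands
# in for a real recording; boundaries are naive (no overlap), an assumption.
sr = 16000
chunk_len = 30 * sr
audio = np.zeros(75 * sr, dtype=np.float32)  # 75 s stand-in signal
chunks = [audio[i:i + chunk_len] for i in range(0, len(audio), chunk_len)]
# Each chunk can be passed to processor.feature_extractor(...) as in the
# snippet above, and the per-chunk transcripts concatenated.
```

For long-form audio in practice, overlapping windows or the Transformers ASR pipeline's built-in chunking tend to give cleaner transcript boundaries.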