---
# Voxtral small LoRA finetuned on CoRaL release 1

This is a Danish state-of-the-art automatic speech recognition (ASR) model. It combines the decoder and audio adapter of [**Voxtral-Small-24B-2507**](https://huggingface.co/mistralai/Voxtral-Small-24B-2507) with the encoder of [**roest-whisper-large-v1**](https://huggingface.co/CoRal-project/roest-whisper-large-v1). The decoder and audio adapter were finetuned with LoRA for 2 epochs on the Danish [CoRal dataset](https://huggingface.co/datasets/CoRal-project/coral).

## Evaluation Results

| Model | Number of parameters | [CoRal](https://huggingface.co/datasets/alexandrainst/coral/viewer/read_aloud/test) CER | [CoRal](https://huggingface.co/datasets/alexandrainst/coral/viewer/read_aloud/test) WER |
|:---|---:|---:|---:|
| [hinge/danstral-v1](https://huggingface.co/hinge/danstral-v1) | 24B | **4.2% ± 0.2%** | **9.7% ± 0.3%** |
| [Alvenir/coral-1-whisper-large](https://huggingface.co/Alvenir/coral-1-whisper-large) | 1.540B | 4.3% ± 0.2% | 10.4% ± 0.3% |
| [alexandrainst/roest-315m](https://huggingface.co/alexandrainst/roest-315m) | 0.315B | 6.6% ± 0.2% | 17.0% ± 0.4% |
| [mhenrichsen/hviske-v2](https://huggingface.co/syvai/hviske-v2) | 1.540B | 4.7% ± 0.07% | 11.8% ± 0.3% |
| [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) | 1.540B | 11.4% ± 0.3% | 28.3% ± 0.6% |
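The CER and WER columns above are edit distances normalized by reference length. A minimal pure-Python sketch of these metrics (not the official CoRal evaluation code, which applies its own text normalization before scoring):

```python
def levenshtein(ref, hyp):
    """Edit distance between two sequences (insertions, deletions, substitutions)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            curr.append(min(
                prev[j] + 1,             # deletion
                curr[j - 1] + 1,         # insertion
                prev[j - 1] + (r != h),  # substitution
            ))
        prev = curr
    return prev[-1]

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref_words = reference.split()
    return levenshtein(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: character-level edit distance / reference length."""
    return levenshtein(list(reference), list(hypothesis)) / len(reference)

print(wer("det er en test", "det er en fest"))  # one substituted word out of four
```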

## Limitations

danstral-v1:

- is a finetune of Voxtral-Small-24B, whose decoder is a finetune of Mistral-Small-24B, a highly capable multilingual LLM currently ranked 13th on the [Danish generative leaderboard](https://euroeval.com/leaderboards/Monolingual/danish/#__tabbed_1_1). Although Mistral does not [disclose the datasets used for training](https://help.mistral.ai/en/articles/347390-does-mistral-ai-communicate-on-the-training-datasets), Danish Wikipedia articles were likely part of Mistral Small's training data. Since the CoRaL test split also contains read-aloud samples from Danish Wikipedia, there is a risk of data leakage, which would inflate the test scores.
- is huge: 16x the size of whisper-large-v3, with only modest performance improvements. The LoRA adapter itself, however, amounts to just 25M parameters.
- is finetuned solely on the CoRal v1 dataset, so performance may deteriorate significantly on other data sources.
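The small adapter size follows from LoRA's low-rank factorization: each adapted weight matrix W (d_out × d_in) is accompanied by two small matrices A (r × d_in) and B (d_out × r), so only r·(d_in + d_out) parameters are trained per matrix. A back-of-the-envelope sketch (the rank, dimensions, and targeted layers below are illustrative assumptions, not the actual training config):

```python
def lora_params(d_in: int, d_out: int, rank: int) -> int:
    # A: rank x d_in, B: d_out x rank
    return rank * d_in + d_out * rank

# Hypothetical setup: LoRA on the attention q/v projections
# of a 40-layer decoder with d_model = 5120 (assumed values)
d_model = 5120
n_layers = 40
rank = 32

per_layer = 2 * lora_params(d_model, d_model, rank)  # q_proj + v_proj
total = n_layers * per_layer
print(f"{total / 1e6:.1f}M trainable parameters")  # on the order of the 25M quoted above
```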
## Future work and ideas

- SOTA performance was achieved with a LoRA adapter of just 25M parameters; a full finetune on larger GPUs and bigger datasets would likely give even better results.
- Use danstral-v1 as a teacher for knowledge distillation to train smaller models.
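The distillation idea amounts to training a small student model to match the teacher's output distribution, commonly via a temperature-softened KL divergence over vocabulary logits. A minimal stdlib sketch (the logit values are random stand-ins; a real setup would use danstral-v1's per-token logits as the teacher):

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a list of logits."""
    m = max(l / T for l in logits)  # subtract max for numerical stability
    exps = [math.exp(l / T - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions, scaled by T^2."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T * T * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [2.0, 1.0, 0.1, -1.0]  # stand-in vocabulary logits from the big model
student = [1.5, 1.2, 0.0, -0.5]  # stand-in logits from a smaller model
print(f"{distillation_loss(student, teacher):.4f}")
```

The loss is zero only when the student reproduces the teacher's distribution exactly, and the temperature softens the targets so the student also learns from the teacher's non-argmax probabilities.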
## How to use

```python
from transformers import VoxtralForConditionalGeneration, AutoProcessor, WhisperForConditionalGeneration
import torch
from peft import PeftModel
from datasets import load_dataset, Audio

repo_id = "mistralai/Voxtral-Small-24B-2507"

processor = AutoProcessor.from_pretrained(repo_id)
model = VoxtralForConditionalGeneration.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    attn_implementation="flash_attention_2",
)

# Load the Danish-finetuned Whisper encoder and swap it into Voxtral's audio tower
whisper_model = WhisperForConditionalGeneration.from_pretrained(
    "CoRal-project/roest-whisper-large-v1",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
)
whisper_encoder_state_dict = whisper_model.model.encoder.state_dict()
model.audio_tower.load_state_dict(whisper_encoder_state_dict)

# Load the LoRA adapters
model = PeftModel.from_pretrained(model, "hinge/danstral-v1")

# CoRal read-aloud test data, resampled to the encoder's 16 kHz input rate
coral = load_dataset("CoRal-project/coral", "read_aloud")
coral = coral.cast_column("audio", Audio(sampling_rate=16000))

for i in range(10):
    sample = coral["test"][i]
    audio_data = sample["audio"]
    ground_truth = sample["text"]

    inputs = processor.apply_transcription_request(
        language="da", audio=audio_data["array"], format=["WAV"], model_id=repo_id
    )
    inputs = inputs.to("cuda:0", dtype=torch.bfloat16)

    outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    decoded_outputs = processor.batch_decode(
        outputs[:, inputs.input_ids.shape[1]:], skip_special_tokens=True
    )

    print(f"Ground Truth: {ground_truth}")
    print(f"Prediction: {decoded_outputs[0]}")
    print("-" * 40)
```