Update README.md
name: WER
---

# Voxtral-Small-24B LoRA Fine-tuned on CoRaL

**Danstral** is a state-of-the-art 24B-parameter model for Danish automatic speech recognition (ASR). It combines the decoder and audio-adapter of [**Voxtral-Small-24B-2507**](https://huggingface.co/mistralai/Voxtral-Small-24B-2507) with the audio encoder from [**roest-whisper-large-v1**](https://huggingface.co/CoRal-project/roest-whisper-large-v1). The decoder and audio-adapter were fine-tuned using LoRA for 2 epochs (40 hours) on the Danish [CoRaL dataset](https://huggingface.co/CoRal-project/coral), using three NVIDIA L40 GPUs. While it achieves state-of-the-art performance on CoRaL, it is a massive model and likely overkill compared to Whisper-based models.

---
## Evaluation Results
| Model | Number of parameters | [CoRaL](https://huggingface.co/datasets/alexandrainst/coral/viewer/read_aloud/test) CER | [CoRaL](https://huggingface.co/datasets/alexandrainst/coral/viewer/read_aloud/test) WER |
|:---|---:|---:|---:|
| [hinge/danstral-v1](https://huggingface.co/hinge/danstral-v1) | 24B | **4.2% ± 0.2%** | **9.7% ± 0.3%** |
| [Alvenir/coral-1-whisper-large](https://huggingface.co/Alvenir/coral-1-whisper-large) | 1.540B | 4.3% ± 0.2% | 10.4% ± 0.3% |
| [alexandrainst/roest-315m](https://huggingface.co/alexandrainst/roest-315m) | 0.315B | 6.6% ± 0.2% | 17.0% ± 0.4% |
| [mhenrichsen/hviske-v2](https://huggingface.co/syvai/hviske-v2) | 1.540B | 4.7% ± 0.07% | 11.8% ± 0.3% |
| [openai/whisper-large-v3](https://hf.co/openai/whisper-large-v3) | 1.540B | 11.4% ± 0.3% | 28.3% ± 0.6% |
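
CER and WER of the kind reported above can be computed with the Hugging Face `evaluate` package on the CoRaL test split. The snippet below is a minimal sketch, not the official evaluation script: `transcribe` is a hypothetical stand-in for whichever model is being scored, and the exact text normalization (casing, punctuation) will shift the numbers.

```python
# Hedged sketch: `transcribe` is a placeholder, and the lower-casing normalization is an assumption.
import evaluate
from datasets import load_dataset, Audio

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")  # requires the `jiwer` package

coral = load_dataset("CoRal-project/coral", "read_aloud", split="test")
coral = coral.cast_column("audio", Audio(sampling_rate=16000))

references, predictions = [], []
for sample in coral.select(range(100)):  # small subset for illustration
    references.append(sample["text"].lower())
    predictions.append(transcribe(sample["audio"]["array"]).lower())  # hypothetical helper

print("WER:", wer_metric.compute(predictions=predictions, references=references))
print("CER:", cer_metric.compute(predictions=predictions, references=references))
```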

---

## Limitations
- Danstral-v1 is huge. It's 16x the size of **coral-1-whisper-large** with only modest performance improvements. However, the LoRA adapter itself is only 25 million parameters.
- Danstral-v1 is a fine-tuned version of **voxtral-small-24b**, whose decoder is a fine-tuned version of **mistral-small-24b**. Mistral does not disclose its training datasets, but it is likely that Danish Wikipedia articles were used. Since the CoRaL test split also contains read-aloud samples from Danish Wikipedia, there is a risk of data leakage, which could influence the test scores.
- The model was fine-tuned solely on the CoRaL v1 dataset, so performance may deteriorate for other data sources.

---

## Future Work and Ideas
- **Further optimization.** The state-of-the-art performance was achieved with a 25M-parameter LoRA adapter. I only conducted a few experiments, and there are likely more performance gains to be had by tweaking the LoRA configuration (a hypothetical starting point is sketched after this list) or by conducting a full-parameter fine-tune.
- **Knowledge distillation.** Danstral-v1 can be used for knowledge distillation to train smaller models.
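
As a concrete starting point for the LoRA-tuning idea above, the sketch below shows a hypothetical `peft` configuration. The rank, alpha, dropout, and target modules are illustrative assumptions, not the settings used to train danstral-v1.

```python
# Hypothetical LoRA configuration sketch; hyperparameters are assumptions, not the danstral-v1 recipe.
import torch
from peft import LoraConfig, get_peft_model
from transformers import VoxtralForConditionalGeneration

model = VoxtralForConditionalGeneration.from_pretrained(
    "mistralai/Voxtral-Small-24B-2507", torch_dtype=torch.bfloat16
)

lora_config = LoraConfig(
    r=32,                      # adapter rank (assumption)
    lora_alpha=64,             # scaling factor (assumption)
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # decoder attention projections (assumption)
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # sanity-check the adapter size against the 24B base
```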

---

## How to Use

See [https://github.com/ChristianHinge/danstral](https://github.com/ChristianHinge/danstral) for the training script.

```python
from transformers import VoxtralForConditionalGeneration, AutoProcessor, WhisperForConditionalGeneration
import torch
from peft import PeftModel
from datasets import load_dataset, Audio

repo_id = "mistralai/Voxtral-Small-24B-2507"
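
# The processor and model construction is not shown in full here. The lines below are a
# minimal sketch of that setup, based on the imports above and the model description;
# the `audio_tower` attribute name and the encoder weight compatibility are assumptions.
processor = AutoProcessor.from_pretrained(repo_id)
model = VoxtralForConditionalGeneration.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="cuda:0"
)

# Swap in the Danish audio encoder from roest-whisper-large-v1 (assumed to be
# weight-compatible with Voxtral's Whisper-based audio tower).
whisper = WhisperForConditionalGeneration.from_pretrained(
    "CoRal-project/roest-whisper-large-v1", torch_dtype=torch.bfloat16
)
model.audio_tower.load_state_dict(whisper.model.encoder.state_dict())

# Attach the danstral-v1 LoRA adapter.
model = PeftModel.from_pretrained(model, "hinge/danstral-v1")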
coral = load_dataset("CoRal-project/coral", "read_aloud")
coral = coral.cast_column("audio", Audio(sampling_rate=16000))

for i in range(10):
    sample = coral["test"][i]
    audio_data = sample['audio']
    ground_truth = sample['text']

    inputs = processor.apply_transcription_request(language="da", audio=audio_data['array'], format=["WAV"], model_id=repo_id)
    inputs = inputs.to("cuda:0", dtype=torch.bfloat16)

    outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    decoded_outputs = processor.batch_decode(outputs[:, inputs.input_ids.shape[1]:], skip_special_tokens=True)

    print(f"Ground Truth: {ground_truth}")
    print(f"Prediction: {decoded_outputs[0]}")
    print("-" * 40)
```
## Shoutouts
- Viktor Stenby Johansson and Rasmus Asgaard for ASR hackathon and ideation
- The CoRal project and Alexandra Institute for curating Danish datasets and leading the effort in Danish NLP