---
license: apache-2.0
---
|
|
# Model Card |
|
## Model Description |
|
This model is part of a series of fine-tuned [OpenAI Whisper models](https://github.com/openai/whisper).
|
The models have been fine-tuned for dynamic audio context robustness: the encoder's audio context can be shortened to match the input, improving performance on short audio inputs. The method is detailed [in our GitHub repo](https://github.com/futo-org/whisper-acft).
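As a rough illustration (not part of the model card itself), a suitable audio context for a clip can be derived from Whisper's fixed ratio of 1500 encoder positions per 30-second window. The helper name below is hypothetical; the exact sizing rule is up to the runtime:

```python
import math

# Whisper's encoder uses 1500 positions for a 30-second window
# (3000 mel frames, conv stride 2), i.e. 50 positions per second.
FULL_CONTEXT = 1500
POSITIONS_PER_SECOND = FULL_CONTEXT / 30  # 50.0

def audio_context_for(duration_seconds: float) -> int:
    """Suggest an encoder audio context for a clip of the given length.

    Hypothetical helper for illustration: the runtime (e.g. whisper.cpp)
    defines how the audio context is actually applied.
    """
    return min(FULL_CONTEXT, math.ceil(duration_seconds * POSITIONS_PER_SECOND))
```

For a 10-second clip this suggests a context of 500 positions, a third of the default 1500, which is where the speed benefit of a shortened context comes from.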
|
- **Developed by:** FUTO
- **License:** Apache-2.0
- **Finetuned from model:** OpenAI Whisper
|
## Uses |
|
On their own, these models offer no benefit under default Whisper runtime configurations; they require a runtime that supports a reduced (dynamic) audio context.
|
The easiest way to experiment with a reduced audio context is whisper.cpp's `--audio-ctx` (`-ac`) parameter. We provide converted whisper.cpp models in our [GitHub README](https://github.com/futo-org/whisper-acft?tab=readme-ov-file#finetuning-whisper-for-dynamic-audio-context-robustness).
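A sketch of such an invocation, assuming a converted model file and the classic whisper.cpp example binary (the binary name, model filename, and audio file below are placeholders, and the context value follows the 50-positions-per-second ratio for a ~10-second clip):

```shell
# Hypothetical paths/filenames; adjust to your whisper.cpp build and model.
# ~10 s clip × 50 positions/s ≈ 500 audio-context positions.
./main -m ggml-base-acft.bin -f short_clip.wav --audio-ctx 500
```

Passing `--audio-ctx 0` (the default) uses the full 1500-position context, which is the configuration under which these models behave like the originals.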
|
## Other Information |
|
More information can be found in our [GitHub README](https://github.com/futo-org/whisper-acft?tab=readme-ov-file#finetuning-whisper-for-dynamic-audio-context-robustness). |