---
language:
- es
- ca
license: apache-2.0
---
# OVOS - Whisper Large v3 Tiny Caesar
This model is an ONNX-format export of [projecte-aina/whisper-large-v3-tiny-caesar](https://huggingface.co/projecte-aina/whisper-large-v3-tiny-caesar), intended for ease of use on edge devices and in CPU-based inference environments.
## Requirements

The export is based on `optimum` (with the `onnxruntime` extra) and `onnx-asr`. The requirements can be installed as:

```shell
$ pip install optimum[onnxruntime] onnx-asr
```
## Usage

```python
import onnx_asr

# Download the ONNX export from the Hugging Face Hub and load it
model = onnx_asr.load_model("OpenVoiceOS/whisper-large-v3-tiny-caesar-onnx")

# Transcribe a local audio file
print(model.recognize("test.wav"))
```
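`recognize` takes the path to a WAV file. Whisper models are trained on 16 kHz mono audio, so inputs in that format are the safest choice (depending on the loader, resampling may or may not be handled for you). As a minimal sketch, using only the Python standard library, here is a hypothetical helper for checking an input file before transcription:

```python
import wave

def wav_format_issues(path: str, expected_rate: int = 16000) -> list:
    """Return a list of format problems for a WAV file.

    An empty list means the file is already 16 kHz mono, the format
    Whisper models are trained on.
    """
    issues = []
    with wave.open(path, "rb") as wav:
        if wav.getnchannels() != 1:
            issues.append(f"expected mono, got {wav.getnchannels()} channels")
        if wav.getframerate() != expected_rate:
            issues.append(f"expected {expected_rate} Hz, got {wav.getframerate()} Hz")
    return issues
```

For example, `wav_format_issues("test.wav")` returns an empty list when the file needs no conversion.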
## Export

Following the `onnx-asr` convert-model-to-onnx instructions:

```shell
$ optimum-cli export onnx --task automatic-speech-recognition-with-past --model projecte-aina/whisper-large-v3-tiny-caesar whisper-onnx
$ cd whisper-onnx && rm decoder.onnx* decoder_with_past_model.onnx*  # only the merged decoder is needed
```
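After the cleanup step, a quick sanity check can confirm the stray decoder files are gone. The sketch below is only illustrative: the helper name is hypothetical, and the filename patterns simply mirror the ones passed to `rm` above.

```python
from pathlib import Path

def leftover_decoders(export_dir: str) -> list:
    """Return non-merged decoder files still present in the export directory.

    The glob patterns mirror the `rm` command used during cleanup; an empty
    result means only the merged decoder (plus the encoder and config files)
    remains.
    """
    d = Path(export_dir)
    patterns = ["decoder.onnx*", "decoder_with_past_model.onnx*"]
    return sorted(p.name for pat in patterns for p in d.glob(pat))
```

Running `leftover_decoders("whisper-onnx")` should return an empty list once the cleanup has been applied.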
## Licensing

The license is derived from the original model: Apache 2.0. For more details, please refer to [projecte-aina/whisper-large-v3-tiny-caesar](https://huggingface.co/projecte-aina/whisper-large-v3-tiny-caesar).