STT/ASR - onnx
Collection
OVOS STT/ASR models suitable for the onnx-asr inference library (ONNX runtime)
This model is an ONNX-format export of projecte-aina/whisper-large-v3-tiny-caesar, intended for ease of use on edge devices and in CPU-based inference environments.
The export is based on the onnx-asr conversion instructions (the exact export commands are given below).
The requirements can be installed with:
$ pip install optimum[onnxruntime] onnx-asr
import onnx_asr

# Load the exported model (fetched from the Hugging Face Hub on first use)
model = onnx_asr.load_model("OpenVoiceOS/whisper-large-v3-tiny-caesar-onnx")

# Transcribe a WAV file and print the resulting text
print(model.recognize("test.wav"))
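Whisper-family models operate on 16 kHz mono audio. If recognition quality is poor, it is worth confirming the input WAV matches that format first. Below is a minimal sketch using only the Python standard library; the 16 kHz / mono / 16-bit expectation is our assumption about the input pipeline, not something stated by this model card:

```python
import wave

def is_whisper_ready(path: str) -> bool:
    """Check that a WAV file is 16 kHz, mono, 16-bit PCM."""
    with wave.open(path, "rb") as wav:
        return (
            wav.getframerate() == 16000  # sample rate
            and wav.getnchannels() == 1  # mono
            and wav.getsampwidth() == 2  # 16-bit samples
        )
```

Files that fail the check can be resampled beforehand, e.g. with ffmpeg: `ffmpeg -i in.wav -ar 16000 -ac 1 out.wav`.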
According to the onnx-asr conversion guide (onnx-asr/convert-model-to-onnx):
$ optimum-cli export onnx --task automatic-speech-recognition-with-past --model projecte-aina/whisper-large-v3-tiny-caesar whisper-onnx
$ cd whisper-onnx && rm decoder.onnx* decoder_with_past_model.onnx* # only the merged decoder is needed
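After trimming the export directory, a quick sanity check can confirm that the files needed for inference are still present. A minimal sketch, assuming the standard optimum-cli output names (`encoder_model.onnx` and the merged decoder `decoder_model_merged.onnx`); adjust the set if your export uses different filenames:

```python
from pathlib import Path

# Filenames assumed from a typical optimum-cli Whisper export; adjust as needed.
REQUIRED = {"encoder_model.onnx", "decoder_model_merged.onnx"}

def missing_onnx_files(export_dir: str) -> list[str]:
    """Return the required ONNX files that are absent from export_dir."""
    present = {p.name for p in Path(export_dir).glob("*.onnx")}
    return sorted(REQUIRED - present)
```

An empty return value means the directory should be loadable; a non-empty list names what was deleted or never exported.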
The license is derived from the original model: Apache 2.0. For more details, please refer to projecte-aina/whisper-large-v3-tiny-caesar.