LiteASR-ONNX release_dll bundle

This folder is generated by make_release_dll.ps1 and is intended for distribution or use in Hugging Face-style repositories.

It bundles:

  • Rust FFI DLL (bin/liteasr_ffi.dll)
  • C ABI header (include/liteasr_ffi.h)
  • Python FFI + ONNXRuntime client (ffi_python/)
  • ONNX/tokenizer assets (models/)
  • Third-party dependency notes (thirdparty_licence.md)

Included models

  • (none copied)

Folder layout

  • bin/liteasr_ffi.dll: Rust FFI runtime library
  • include/liteasr_ffi.h: C ABI header for native bindings
  • ffi_python/liteasr_ffi.py: ctypes wrapper for the DLL
  • ffi_python/onnx_transcribe_ffi.py: ONNXRuntime pipeline using the DLL helpers
  • models/<model_name>/onnx/: ONNX files (encoder/decoder/decoder_with_past)
  • models/<model_name>/tokenizer/: tokenizer/config JSON files
  • thirdparty_licence.md: dependency license inventory
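The layout above can be sanity-checked programmatically before shipping. A minimal sketch, assuming only the paths listed above (the helper names here are my own, not part of the bundle):

```python
from pathlib import Path

def bundle_paths(root, model_name):
    """Expected paths inside one release_dll bundle, mirroring the folder layout."""
    root = Path(root)
    return {
        "dll": root / "bin" / "liteasr_ffi.dll",
        "header": root / "include" / "liteasr_ffi.h",
        "client": root / "ffi_python" / "onnx_transcribe_ffi.py",
        "onnx_dir": root / "models" / model_name / "onnx",
        "tokenizer_dir": root / "models" / model_name / "tokenizer",
    }

def missing_parts(root, model_name):
    """Return the keys of expected paths that do not exist on disk."""
    return [k for k, p in bundle_paths(root, model_name).items() if not p.exists()]
```

Running missing_parts against a freshly generated bundle should return an empty list (or only the model entries, when no models were copied).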

Quick start (Python)

cd ffi_python
uv run python onnx_transcribe_ffi.py --dll-path ..\bin\liteasr_ffi.dll --model-name efficient-speech__lite-whisper-large-v3-turbo-acc --audio ..\..\samples\a01.wav --out-dir ..\artifacts_a01
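Under the hood, ffi_python/liteasr_ffi.py loads the DLL via ctypes. A minimal sketch of that loading step, assuming nothing about the exported symbols (which are declared in include/liteasr_ffi.h; the helper name below is illustrative, not the real API):

```python
import ctypes
import os

def load_liteasr(dll_path):
    """Load the LiteASR FFI DLL via ctypes.

    Returns the ctypes library handle, or None if the file is missing.
    Exported function signatures should be set up afterwards from the
    declarations in include/liteasr_ffi.h.
    """
    if not os.path.exists(dll_path):
        return None
    return ctypes.CDLL(dll_path)
```

The real client goes further and binds argtypes/restype for each exported function; see liteasr_ffi.py for the authoritative bindings.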

Regenerate bundle

All models:

./make_release_dll.ps1 -Clean -IncludeAllModels

Selected models:

./make_release_dll.ps1 -Clean -ModelNames efficient-speech__lite-whisper-large-v3-turbo-acc

Notes

  • Some ONNX exports may require .onnx.data sidecar files; these are copied when present.
  • Model weights/tokenizer terms are governed by the original model publishers.
  • thirdparty_licence.md is for engineering reference, not legal advice.