# lilfugu-experimental-8bit

8-bit quantized version of lilfugu-experimental. See the main model card for details.

## Usage

```bash
pip install -U mlx-audio
```

```python
from mlx_audio.stt import load

model = load("holotherapper/lilfugu-experimental-8bit")
result = model.generate("audio.wav", language="Japanese")
print(result.text)
```
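For transcribing more than one file, a small helper can loop over a directory and collect the results. This is a hedged sketch, not part of the model card: it assumes only the `load`/`generate` interface shown above (a model object whose `generate(path, language=...)` returns a result with a `.text` attribute), and the `transcribe_dir` helper name is invented here for illustration.

```python
from pathlib import Path

def transcribe_dir(load_fn, model_name, audio_dir, pattern="*.wav", language="Japanese"):
    """Transcribe every audio file in `audio_dir` matching `pattern`.

    `load_fn` is assumed to follow the `mlx_audio.stt.load` interface
    shown above: it returns a model whose `generate(path, language=...)`
    returns an object with a `.text` attribute. Returns a dict mapping
    file name to transcript text.
    """
    model = load_fn(model_name)
    results = {}
    for path in sorted(Path(audio_dir).glob(pattern)):
        results[path.name] = model.generate(str(path), language=language).text
    return results
```

Under those assumptions, usage would look like `transcribe_dir(load, "holotherapper/lilfugu-experimental-8bit", "clips/")` after `from mlx_audio.stt import load`.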