Whisper-Tiny-ExecuTorch-XNNPACK

Pre-exported ExecuTorch .pte files for Whisper Tiny with the XNNPACK backend (CPU). A lightweight speech-to-text model that runs on any platform ExecuTorch supports.

Installation

git clone https://github.com/pytorch/executorch/ ~/executorch
cd ~/executorch && ./install_executorch.sh
make whisper-cpu

Download

pip install huggingface_hub
huggingface-cli download younghan-meta/Whisper-Tiny-ExecuTorch-XNNPACK --local-dir ~/whisper_tiny

Run

cmake-out/examples/models/whisper/whisper_runner \
    --model_path ~/whisper_tiny/model.pte \
    --tokenizer_path ~/whisper_tiny/ \
    --processor_path ~/whisper_tiny/whisper_preprocessor.pte \
    --audio_path ~/whisper_tiny/poem.wav \
    --temperature 0
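The --temperature 0 flag makes decoding deterministic: at each step the runner takes the highest-scoring token (greedy argmax) instead of sampling from the softmax distribution. A minimal pure-Python sketch of the difference (the logits below are made-up values for illustration, not real Whisper outputs):

```python
import math
import random

def sample_token(logits, temperature=0.0, rng=None):
    """Pick a token id from raw logits.

    temperature == 0 -> greedy argmax (deterministic, as in the runner
    invocation above); temperature > 0 -> softmax sampling with the
    logits divided by the temperature first.
    """
    if temperature == 0:
        # Greedy decoding: always the single most likely token.
        return max(range(len(logits)), key=lambda i: logits[i])
    rng = rng or random.Random(0)
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    probs = [math.exp(s - m) for s in scaled]
    total = sum(probs)
    probs = [p / total for p in probs]
    # Inverse-CDF sampling over the normalized probabilities.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(logits) - 1

print(sample_token([0.1, 2.5, -1.0]))  # greedy: index of the max logit -> 1
```

With temperature 0 repeated runs on the same audio produce identical transcripts, which is usually what you want for transcription.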

Export Commands

pip install optimum-executorch

# Model
optimum-cli export executorch \
    --model openai/whisper-tiny \
    --task automatic-speech-recognition \
    --recipe xnnpack \
    --output_dir ./whisper_tiny_xnnpack

# Preprocessor
python -m executorch.extension.audio.mel_spectrogram \
    --feature_size 80 --stack_output --max_audio_len 300 \
    --output_file ./whisper_tiny_xnnpack/whisper_preprocessor.pte
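The exported preprocessor computes Whisper's standard log-mel front end: 80 mel bins (matching --feature_size 80) over 16 kHz audio with a 160-sample (10 ms) hop, so each 30-second window becomes an 80 x 3000 feature matrix. A small sketch of that shape arithmetic (the parameter values are the usual Whisper defaults, assumed here rather than read from the .pte):

```python
# Assumed standard Whisper front-end parameters; the actual computation
# is compiled into whisper_preprocessor.pte by the command above.
SAMPLE_RATE = 16_000   # Hz
N_MELS = 80            # matches --feature_size 80
HOP_LENGTH = 160       # 10 ms hop at 16 kHz
CHUNK_SECONDS = 30     # Whisper processes audio in 30 s windows

def feature_shape(num_samples: int) -> tuple:
    """Return (n_mels, n_frames) for a chunk of num_samples samples."""
    frames = num_samples // HOP_LENGTH
    return (N_MELS, frames)

print(feature_shape(CHUNK_SECONDS * SAMPLE_RATE))  # (80, 3000)
```

This is why the runner takes the preprocessor .pte alongside the model: raw waveform in, log-mel features out, then the encoder runs on those features.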
