# Whisper-Tiny-ExecuTorch-XNNPACK

Pre-exported ExecuTorch `.pte` files for Whisper Tiny with the XNNPACK backend (CPU). A lightweight speech-to-text model that runs on-device via the ExecuTorch runtime.
## Installation

```shell
git clone https://github.com/pytorch/executorch/ ~/executorch
cd ~/executorch && ./install_executorch.sh
make whisper-cpu
```
## Download

```shell
pip install huggingface_hub
huggingface-cli download younghan-meta/Whisper-Tiny-ExecuTorch-XNNPACK --local-dir ~/whisper_tiny
```
## Run

```shell
cmake-out/examples/models/whisper/whisper_runner \
  --model_path ~/whisper_tiny/model.pte \
  --tokenizer_path ~/whisper_tiny/ \
  --processor_path ~/whisper_tiny/whisper_preprocessor.pte \
  --audio_path ~/whisper_tiny/poem.wav \
  --temperature 0
```
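Whisper models are trained on 16 kHz mono audio, so the input WAV should match that format. A small stdlib sanity check can catch mis-sampled files before invoking the runner (the `check_audio` helper is a hypothetical convenience, not part of the runner's interface):

```python
import wave

def check_audio(path, expected_rate=16000):
    # Hypothetical helper: Whisper expects 16 kHz mono input, so reject
    # anything else before handing the file to the runner.
    with wave.open(path, "rb") as w:
        rate = w.getframerate()
        channels = w.getnchannels()
        seconds = w.getnframes() / rate
    if rate != expected_rate or channels != 1:
        raise ValueError(
            f"expected {expected_rate} Hz mono, got {rate} Hz / {channels} channel(s)"
        )
    return seconds
```

Note that the `wave` module does not expand `~`, so pass an absolute or expanded path (e.g. via `os.path.expanduser`).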
## Export Commands

```shell
pip install optimum-executorch

# Model
optimum-cli export executorch \
  --model openai/whisper-tiny \
  --task automatic-speech-recognition \
  --recipe xnnpack \
  --output_dir ./whisper_tiny_xnnpack

# Preprocessor
python -m executorch.extension.audio.mel_spectrogram \
  --feature_size 80 --stack_output --max_audio_len 300 \
  --output_file ./whisper_tiny_xnnpack/whisper_preprocessor.pte
```
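The exported preprocessor turns raw 16 kHz audio into an 80-bin log-mel spectrogram (hence `--feature_size 80`), the input Whisper's encoder expects. A minimal NumPy sketch of that transform, assuming Whisper's usual framing (400-sample window, 160-sample hop); the function names are illustrative, not the module's API:

```python
import numpy as np

SAMPLE_RATE = 16000
N_FFT = 400   # 25 ms analysis window at 16 kHz
HOP = 160     # 10 ms hop
N_MELS = 80   # matches --feature_size 80 above

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels=N_MELS, n_fft=N_FFT, sr=SAMPLE_RATE):
    # Triangular filters spaced evenly on the mel scale.
    fft_freqs = np.linspace(0, sr / 2, n_fft // 2 + 1)
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    hz_pts = mel_to_hz(mel_pts)
    fb = np.zeros((n_mels, len(fft_freqs)))
    for i in range(n_mels):
        left, center, right = hz_pts[i], hz_pts[i + 1], hz_pts[i + 2]
        up = (fft_freqs - left) / (center - left)
        down = (right - fft_freqs) / (right - center)
        fb[i] = np.maximum(0.0, np.minimum(up, down))
    return fb

def log_mel_spectrogram(audio):
    # Frame the signal, window each frame, and take power spectra.
    n_frames = 1 + (len(audio) - N_FFT) // HOP
    frames = np.stack(
        [audio[i * HOP : i * HOP + N_FFT] for i in range(n_frames)]
    )
    spec = np.abs(np.fft.rfft(frames * np.hanning(N_FFT), axis=1)) ** 2
    mel = spec @ mel_filterbank().T
    return np.log10(np.maximum(mel, 1e-10)).T  # shape: (n_mels, n_frames)
```

This only sketches the math; the exported `whisper_preprocessor.pte` is the artifact the runner actually consumes, and Whisper's reference implementation uses centered padding, so exact frame counts differ slightly.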
## More Info
Base model: [`openai/whisper-tiny`](https://huggingface.co/openai/whisper-tiny)