This is a LiteRT (formerly TensorFlow Lite) conversion of openai/whisper-small for efficient on-device inference, based on the paper Robust Speech Recognition via Large-Scale Weak Supervision (arXiv:2212.04356).
| Property | Value |
|---|---|
| Original Model | openai/whisper-small |
| Format | LiteRT (.tflite) |
| File Size | 336.5 MB |
| Task | Speech Recognition (Encoder Only) |
| Max Sequence Length | 3000 |
| Output Dimension | 768 |
| Pooling Mode | N/A (Encoder output) |
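The input and output tensor shapes can be verified directly from the `.tflite` file with the LiteRT interpreter. A minimal sketch; the expected shapes in the comments are assumptions based on the whisper-small architecture (80 mel bins, 3000 frames, 768-dim hidden states), not values read from this repo:

```python
from ai_edge_litert.interpreter import Interpreter

# Inspect the converted encoder's input/output tensors
interpreter = Interpreter(model_path="openai_whisper-small_encoder.tflite")
interpreter.allocate_tensors()

for d in interpreter.get_input_details():
    print("input :", d["name"], d["shape"], d["dtype"])   # expected: (1, 80, 3000) float32 log-mel features
for d in interpreter.get_output_details():
    print("output:", d["name"], d["shape"], d["dtype"])   # expected: (1, 1500, 768) float32 hidden states
```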
Benchmarked on an AMD CPU (WSL2); a reproduction sketch follows the usage example below:
| Metric | Value |
|---|---|
| Inference Latency | 1293.6 ms |
| Throughput | 0.8 inferences/sec |
| Cosine Similarity vs Original | 1.0000 ✅ |
```python
import numpy as np
import librosa
from ai_edge_litert.interpreter import Interpreter
from transformers import WhisperProcessor

# Load the LiteRT model
interpreter = Interpreter(model_path="openai_whisper-small_encoder.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Load the Whisper feature extractor / processor
processor = WhisperProcessor.from_pretrained("openai/whisper-small")

def encode_audio(audio_path: str) -> np.ndarray:
    """Extract encoder features from an audio file."""
    # Whisper expects 16 kHz mono audio
    audio, sr = librosa.load(audio_path, sr=16000)
    # Log-mel spectrogram padded/truncated to 3000 frames
    input_features = processor(audio, sampling_rate=16000, return_tensors="np").input_features
    interpreter.set_tensor(input_details[0]["index"], input_features.astype(np.float32))
    interpreter.invoke()
    # Encoder hidden states
    return interpreter.get_tensor(output_details[0]["index"])

# Example
# features = encode_audio("audio.wav")
```
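The latency and cosine-similarity figures above can be reproduced roughly as follows. This sketch reuses the `interpreter`, `input_details`, `output_details`, and `processor` objects from the snippet above; the run count and audio file are illustrative, and the published numbers were measured on the converter's machine, so results will differ:

```python
import time

import librosa
import numpy as np
import torch
from transformers import WhisperModel

# Prepare one input (log-mel features)
audio, _ = librosa.load("audio.wav", sr=16000)
feats = processor(audio, sampling_rate=16000, return_tensors="np").input_features.astype(np.float32)

# Latency / throughput: time only the interpreter invocation
runs = 20
interpreter.set_tensor(input_details[0]["index"], feats)
start = time.perf_counter()
for _ in range(runs):
    interpreter.invoke()
elapsed = (time.perf_counter() - start) / runs
lite_out = interpreter.get_tensor(output_details[0]["index"])
print(f"avg latency: {elapsed * 1000:.1f} ms, throughput: {1 / elapsed:.2f}/sec")

# Cosine similarity against the original PyTorch encoder
ref = WhisperModel.from_pretrained("openai/whisper-small").eval()
with torch.no_grad():
    ref_out = ref.encoder(torch.from_numpy(feats)).last_hidden_state.numpy()
a, b = lite_out.flatten(), ref_out.flatten()
print("cosine similarity:", float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b))))
```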
Note: This is the encoder-only model. For full ASR, you need the decoder as well.
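As an illustration only (not an on-device path), the LiteRT encoder output can be fed to the original PyTorch decoder from `transformers` to produce a transcription. This is a greedy-decoding sketch; the English/transcribe prompt, the 128-token limit, and the `encode_audio` helper from the usage example are assumptions based on whisper-small defaults:

```python
import torch
from transformers import WhisperForConditionalGeneration, WhisperTokenizer

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small").eval()
tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-small", language="en", task="transcribe")

# Encoder hidden states from the LiteRT model
encoder_states = torch.from_numpy(encode_audio("audio.wav"))

# Greedy decoding starting from <|startoftranscript|><|en|><|transcribe|><|notimestamps|>
decoder_ids = torch.tensor([tokenizer.prefix_tokens])
with torch.no_grad():
    for _ in range(128):
        logits = model(decoder_input_ids=decoder_ids,
                       encoder_outputs=(encoder_states,)).logits
        next_id = logits[:, -1].argmax(dim=-1, keepdim=True)
        decoder_ids = torch.cat([decoder_ids, next_id], dim=-1)
        if next_id.item() == tokenizer.eos_token_id:
            break

print(tokenizer.decode(decoder_ids[0], skip_special_tokens=True))
```

For fully on-device ASR, the decoder would also need to be converted to LiteRT.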
openai_whisper-small_encoder.tflite - the LiteRT model file

This model inherits the license from the original model:
```bibtex
@misc{radford2022whisper,
  title={Robust Speech Recognition via Large-Scale Weak Supervision},
  author={Alec Radford and Jong Wook Kim and others},
  year={2022},
  eprint={2212.04356},
  archivePrefix={arXiv},
}
```
Converted by Bombek1