whisper-small - LiteRT

This is a LiteRT (formerly TensorFlow Lite) conversion of the openai/whisper-small encoder for efficient on-device inference.

Model Details

| Property | Value |
|---|---|
| Original Model | openai/whisper-small |
| Format | LiteRT (.tflite) |
| File Size | 336.5 MB |
| Task | Speech Recognition (Encoder Only) |
| Max Input Length | 3000 mel frames (30 s of audio) |
| Output Dimension | 768 |
| Pooling Mode | N/A (encoder output) |
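
These shapes are easy to confirm from the model file itself; a minimal check, assuming the .tflite file sits in the working directory:

```python
from ai_edge_litert.interpreter import Interpreter

# Print the I/O tensor specs reported by the LiteRT runtime.
interpreter = Interpreter(model_path="openai_whisper-small_encoder.tflite")
interpreter.allocate_tensors()

for d in interpreter.get_input_details():
    print("input :", d["name"], d["shape"], d["dtype"])
for d in interpreter.get_output_details():
    print("output:", d["name"], d["shape"], d["dtype"])
# For whisper-small: input (1, 80, 3000) float32, output (1, 1500, 768) float32
```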

Performance

Benchmarked on AMD CPU (WSL2):

| Metric | Value |
|---|---|
| Inference Latency | 1293.6 ms |
| Throughput | 0.8 inferences/sec |
| Cosine Similarity vs Original | 1.0000 ✅ |
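
Numbers of this kind can be reproduced with a simple timing loop. A minimal sketch; the warm-up and iteration counts here are illustrative choices, not the exact benchmark setup used:

```python
import time
import numpy as np
from ai_edge_litert.interpreter import Interpreter

interpreter = Interpreter(model_path="openai_whisper-small_encoder.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]

# Feed a silence-like dummy input; latency does not depend on the content.
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=np.float32))
for _ in range(3):  # warm-up runs
    interpreter.invoke()

runs = 20
start = time.perf_counter()
for _ in range(runs):
    interpreter.invoke()
elapsed = time.perf_counter() - start
print(f"{elapsed / runs * 1000:.1f} ms/inference, {runs / elapsed:.2f} inferences/sec")
```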

Quick Start

```python
import numpy as np
from ai_edge_litert.interpreter import Interpreter
from transformers import WhisperProcessor
import librosa

# Load model
interpreter = Interpreter(model_path="openai_whisper-small_encoder.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Load processor
processor = WhisperProcessor.from_pretrained("openai/whisper-small")

def encode_audio(audio_path: str) -> np.ndarray:
    """Extract encoder features from audio file."""
    audio, sr = librosa.load(audio_path, sr=16000)
    input_features = processor(audio, sampling_rate=16000, return_tensors="np").input_features

    interpreter.set_tensor(input_details[0]["index"], input_features.astype(np.float32))
    interpreter.invoke()

    return interpreter.get_tensor(output_details[0]["index"])

# Example
# features = encode_audio("audio.wav")
```

Note: This is the encoder-only model. For full ASR, you need the decoder as well.
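
For illustration, here is one way to pair this LiteRT encoder with the stock PyTorch decoder from transformers. This is a minimal greedy-decoding sketch, not the library's full generate() pipeline; it reuses processor and encode_audio() from the Quick Start, and the language/task settings are assumptions:

```python
import torch
from transformers import WhisperForConditionalGeneration

# The decoder side stays in PyTorch; the LiteRT encoder output is passed in
# as encoder_outputs. Greedy decoding only -- a sketch, not production code.
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small").eval()

enc = torch.from_numpy(encode_audio("audio.wav"))            # (1, 1500, 768)
prompt = processor.get_decoder_prompt_ids(language="en", task="transcribe")
ids = torch.tensor([[model.config.decoder_start_token_id] + [t for _, t in prompt]])

with torch.no_grad():
    for _ in range(128):                                     # cap output length
        logits = model(encoder_outputs=(enc,), decoder_input_ids=ids).logits
        next_id = logits[0, -1].argmax().view(1, 1)
        ids = torch.cat([ids, next_id], dim=-1)
        if next_id.item() == model.config.eos_token_id:
            break

print(processor.decode(ids[0], skip_special_tokens=True))
```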

Files

  • openai_whisper-small_encoder.tflite - The LiteRT model file

Conversion Details

  • Conversion Tool: ai-edge-torch (a conversion sketch follows this list)
  • Conversion Date: 2026-01-12
  • Source Framework: PyTorch → LiteRT
  • Validation: Cosine similarity 1.0000 vs original
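
As a rough illustration of the ai-edge-torch flow (the wrapper module and sample shape below are assumptions, not the exact script used to produce this file):

```python
import torch
import ai_edge_torch
from transformers import WhisperModel

# Wrap the encoder so it returns a plain tensor rather than a transformers
# output object, which converts more cleanly.
class EncoderWrapper(torch.nn.Module):
    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder

    def forward(self, input_features):
        return self.encoder(input_features).last_hidden_state

encoder = EncoderWrapper(WhisperModel.from_pretrained("openai/whisper-small").encoder).eval()
sample_input = (torch.randn(1, 80, 3000),)  # (batch, mel bins, frames)

edge_model = ai_edge_torch.convert(encoder, sample_input)
edge_model.export("openai_whisper-small_encoder.tflite")
```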

Intended Use

  • Mobile Applications: On-device speech recognition and transcription pipelines
  • Edge Devices: IoT, embedded systems, Raspberry Pi
  • Offline Processing: Privacy-preserving inference
  • Low-latency Applications: Real-time processing

Limitations

  • Fixed input length (3000 mel frames = 30 s of audio; shorter clips are padded by the processor)
  • CPU inference by default (the GPU delegate requires extra setup; see the sketch after this list)
  • Processor (feature extractor) must be loaded separately from the original model
  • Float32 precision
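
On CPU, raising the interpreter's thread count is the easiest speed-up. The GPU-delegate path is sketched below but commented out, since the delegate import and shared-library name are platform-specific assumptions:

```python
from ai_edge_litert.interpreter import Interpreter

# More CPU threads usually helps on multi-core machines.
interpreter = Interpreter(
    model_path="openai_whisper-small_encoder.tflite",
    num_threads=4,
)

# GPU delegate sketch (the import and library name below are assumptions
# and vary by OS/build; on Android the delegate is usually bundled instead):
# from ai_edge_litert.interpreter import load_delegate
# gpu = load_delegate("libtensorflowlite_gpu_delegate.so")
# interpreter = Interpreter(
#     model_path="openai_whisper-small_encoder.tflite",
#     experimental_delegates=[gpu],
# )
```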

License

This model inherits the license of the original model, openai/whisper-small (Apache 2.0).

Citation

```bibtex
@misc{radford2022whisper,
  title={Robust Speech Recognition via Large-Scale Weak Supervision},
  author={Alec Radford and Jong Wook Kim and others},
  year={2022},
  eprint={2212.04356},
  archivePrefix={arXiv},
}
```

Acknowledgments

Converted by Bombek1
