# Whisper Small (ONNX)

Production-ready ONNX conversion of openai/whisper-small for in-browser multilingual speech recognition — zero server cost, no network round trips, complete privacy.

## Highlights
- Multilingual ASR — supports 99 languages
- 244M parameters — lightweight enough for browser and mobile inference
- transformers.js compatible — drop-in `pipeline('automatic-speech-recognition')` support
- Versatile — transcription, translation, and language detection
## Quick Start

```js
import { pipeline } from '@huggingface/transformers';

const transcriber = await pipeline(
  'automatic-speech-recognition',
  'affectively-ai/whisper-small-onnx'
);

const result = await transcriber(audioBlob);
// { text: 'Hello, how are you feeling today?' }
```
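Whisper is multi-task, so decoding behavior is chosen per call. The snippet below is a sketch of common per-call options for the transformers.js ASR pipeline; treat the exact option names as assumptions to verify against the version you have installed.

```js
// Common per-call options for the transformers.js ASR pipeline.
// These option names are assumptions — check them against your
// installed transformers.js version before relying on them.
const options = {
  language: 'french',  // force the source language instead of auto-detecting
  task: 'translate',   // 'transcribe' (default) or 'translate' to English
  chunk_length_s: 30,  // process long audio in 30-second chunks
};

// const result = await transcriber(audioBlob, options);
```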
## Model Details
| Property | Value |
|---|---|
| Base model | openai/whisper-small |
| Parameters | 244M |
| Languages | 99 |
| Input | 16 kHz audio |
| Tasks | Transcription, translation, language ID |
| License | Apache 2.0 |
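Because the model expects mono 16 kHz input while browser audio capture typically runs at 44.1 or 48 kHz, raw samples usually need resampling first. Below is a minimal linear-interpolation sketch (`resampleTo16k` is a hypothetical helper, not part of this package); in the browser, resampling via `OfflineAudioContext` is usually preferable for quality.

```js
// Minimal sketch: resample mono Float32Array PCM to the 16 kHz the
// model expects, using linear interpolation between source samples.
function resampleTo16k(samples, srcRate) {
  const dstRate = 16000;
  if (srcRate === dstRate) return samples;
  const dstLength = Math.round(samples.length * dstRate / srcRate);
  const out = new Float32Array(dstLength);
  for (let i = 0; i < dstLength; i++) {
    const pos = i * srcRate / dstRate;            // fractional source index
    const i0 = Math.floor(pos);
    const i1 = Math.min(i0 + 1, samples.length - 1);
    const frac = pos - i0;
    out[i] = samples[i0] * (1 - frac) + samples[i1] * frac;
  }
  return out;
}

// One second of 48 kHz audio becomes 16000 samples.
const src = new Float32Array(48000).fill(0.5);
const dst = resampleTo16k(src, 48000);
console.log(dst.length); // 16000
```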
## Use Cases
This model powers lightweight speech recognition in Edgework.ai — bringing fast, cheap, and private inference as close to the user as possible. Best for:
- Voice-to-text in multilingual emotion journaling
- Lightweight ASR where Parakeet is too large
- Language detection before routing to translation
- Quick voice notes and transcription
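The language-detection-before-routing pattern above can be sketched as a small dispatch step: detect the spoken language first, then pick the Whisper task for the final pass. `chooseTask` and the language codes here are illustrative assumptions, not part of this package.

```js
// Hypothetical routing sketch: transcribe English directly, translate
// everything else to English. Language codes are illustrative.
const TARGET_LANG = 'en';

function chooseTask(detectedLang) {
  return detectedLang === TARGET_LANG ? 'transcribe' : 'translate';
}

console.log(chooseTask('fr')); // 'translate'
console.log(chooseTask('en')); // 'transcribe'
```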
## Related Models
| Model | Parameters | Languages | Use case |
|---|---|---|---|
| whisper-small-onnx | 244M | 99 | Lightweight multilingual |
| whisper-large-v3-onnx | 1.5B | 99+ | Best multilingual quality |
| whisper-large-v3-turbo-onnx | 809M | 99+ | Fast, high-quality |
| parakeet-ctc-0.6b-onnx | 0.6B | English | Best English-only ASR |
## About
Published by AFFECTIVELY · Managed by @buley
We convert, quantize, and publish production-ready ONNX models for edge and in-browser inference. Every release is tested for correctness and stability before publication.
- All models · GitHub · Edgework.ai