---
language:
  - en
license: apache-2.0
library_name: mlx
pipeline_tag: automatic-speech-recognition
base_model: CohereLabs/cohere-transcribe-03-2026
base_model_relation: quantized
tags:
  - mlx
  - asr
  - speech-recognition
  - transcription
  - apple-silicon
  - quantized
  - 8bit
---

# Cohere Transcribe — MLX

This repository contains an MLX-native int8 conversion of Cohere Transcribe for local automatic speech recognition on Apple Silicon.

It is intended for local transcription with mlx-speech, without a PyTorch runtime or cloud API dependency at inference time.

## Variants

| Path | Precision |
| --- | --- |
| `mlx-int8/` | int8 quantized weights |
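As background on what "int8 quantized weights" means in practice: each weight tensor is stored as 8-bit integers plus per-group float scales, and the floats are recovered by multiplying at load or inference time. The sketch below is illustrative only — it shows symmetric per-group quantization and does not reproduce MLX's exact quantization scheme (group size and helper names here are assumptions):

```python
import numpy as np

def quantize_int8(w: np.ndarray, group_size: int = 64):
    """Symmetric per-group int8 quantization (illustrative, not MLX's exact scheme)."""
    w = w.reshape(-1, group_size)
    # One scale per group, chosen so the largest magnitude maps to 127.
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale.astype(np.float32)

def dequantize_int8(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Recover approximate float weights from int8 values and per-group scales."""
    return (q.astype(np.float32) * scale).reshape(-1)

rng = np.random.default_rng(0)
w = rng.normal(size=(4096,)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize_int8(q, scale)
print(float(np.abs(w - w_hat).max()))  # small reconstruction error
```

The storage win is roughly 4x versus float32 (one byte per weight plus a small per-group scale), at the cost of the reconstruction error printed above.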

## Model Details

- **Developed by:** AppAutomaton
- **Shared by:** AppAutomaton on Hugging Face
- **Upstream model:** cohere-transcribe-03-2026
- **Task:** automatic speech recognition
- **Runtime:** MLX on Apple Silicon

## How to Get Started

Command-line transcription with mlx-speech:

```bash
python scripts/generate/cohere_asr.py \
  --audio input.wav \
  --output transcript.txt
```

Minimal Python usage:

```python
import numpy as np
import soundfile as sf

from mlx_speech.generation import CohereAsrModel

# Load audio, downmix to mono, and resample to the 16 kHz the model expects.
audio, sr = sf.read("input.wav", dtype="float32", always_2d=False)
if audio.ndim > 1:
    audio = audio.mean(axis=1)
if sr != 16000:
    old_len = len(audio)
    new_len = int(round(old_len * 16000 / sr))
    audio = np.interp(
        np.linspace(0, old_len - 1, new_len), np.arange(old_len), audio
    ).astype(np.float32)

model = CohereAsrModel.from_path("mlx-int8")
result = model.transcribe(audio, sample_rate=16000, language="en")
print(result.text)
```

## Notes

- This repo contains the quantized MLX runtime artifact only.
- The conversion keeps the original encoder-decoder ASR architecture and remaps weights explicitly for MLX inference.
- The example above resamples to 16 kHz before calling `transcribe()`, which matches the runtime requirement.
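The linear interpolation in the example above is simple and dependency-free, but it can alias on downsampling. If SciPy is available (an extra dependency, not required by this repo), a polyphase resampler gives better quality for the same one-line call:

```python
from math import gcd

import numpy as np
from scipy.signal import resample_poly

def resample_to_16k(audio: np.ndarray, sr: int) -> np.ndarray:
    """Resample mono float32 audio to 16 kHz with a polyphase low-pass filter."""
    if sr == 16000:
        return audio
    # Reduce the rate ratio to the smallest integer up/down factors.
    g = gcd(sr, 16000)
    return resample_poly(audio, 16000 // g, sr // g).astype(np.float32)

# Example: one second of 44.1 kHz audio becomes exactly 16000 samples.
x = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100).astype(np.float32)
y = resample_to_16k(x, 44100)
print(len(y))  # 16000
```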


## License

Apache 2.0 — following the upstream Cohere Transcribe model license. Check the original Cohere release for current terms.