ur_int8_ct2

An int8 CTranslate2 conversion of whisper-large-v3, intended for Urdu transcription with faster-whisper.

Usage

from faster_whisper import WhisperModel

model = WhisperModel("mahwizzzz/ur_int8_ct2", device="cuda", compute_type="int8")
segments, info = model.transcribe("audio.mp3", language="ur")
print(" ".join([s.text for s in segments]))

Note: This model requires 128 mel bins. The preprocessor_config.json in this repo is already set to feature_size=128, so no manual override is needed.
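The mel-bin count matters because whisper-large-v3 uses 128 mel filterbank bins while earlier Whisper checkpoints use 80, and a mismatch silently degrades transcriptions. A minimal sketch of checking feature_size before loading (the inline JSON mirrors the value this repo ships; the helper name is illustrative, not part of any library):

```python
import json

# Example preprocessor_config.json content; feature_size=128 mirrors
# what this repo ships (whisper-large-v3 expects 128 mel bins).
config_text = '{"feature_size": 128, "sampling_rate": 16000}'

def check_mel_bins(config_json: str, expected: int = 128) -> int:
    """Parse a preprocessor config and verify the mel-bin count."""
    config = json.loads(config_json)
    feature_size = config["feature_size"]
    if feature_size != expected:
        raise ValueError(
            f"feature_size is {feature_size}, expected {expected}; "
            "transcription quality degrades with mismatched mel bins"
        )
    return feature_size

print(check_mel_bins(config_text))  # -> 128
```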

Conversion details

  • Format: CTranslate2 int8
  • Mel bins: 128 (inherited from the whisper-large-v3 base model)
  • Quantization: int8
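The repo does not document the exact conversion command. Assuming the model was produced with the standard CTranslate2 converter from the openai/whisper-large-v3 checkpoint, a typical invocation would look like this (a sketch, not the verified original command):

```shell
# Hypothetical conversion command; assumes ctranslate2 and
# transformers are installed (pip install ctranslate2 transformers).
ct2-transformers-converter \
  --model openai/whisper-large-v3 \
  --output_dir ur_int8_ct2 \
  --quantization int8 \
  --copy_files tokenizer.json preprocessor_config.json
```

The --copy_files flag carries preprocessor_config.json into the output directory, which is how the feature_size=128 setting noted above ends up in the converted repo.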

Example output

Input: Urdu speech
Output: ہیلو السلام علیکم آپ کیا چیتے ہیں مجھے اپنے بارے میں کچھ بتائیں
(roughly: "Hello, assalamu alaikum, what do you want? Tell me something about yourself.")
