Whisper medium-pa model for CTranslate2

This repository contains the conversion of aipanjab/whisper-medium-pa to the CTranslate2 model format.

This model can be used in CTranslate2 or projects based on CTranslate2 such as faster-whisper.
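For reference, conversions like this one are typically produced with CTranslate2's Transformers converter. The command below is a sketch of that process; the output directory name and the float16 quantization setting are assumptions, not necessarily the exact options used for this repository.

```shell
# Install the converter and its dependencies (assumed environment)
pip install ctranslate2 "transformers[torch]"

# Convert the original Hugging Face checkpoint to the CTranslate2 format
ct2-transformers-converter --model aipanjab/whisper-medium-pa \
    --output_dir faster-whisper-medium-pa \
    --copy_files tokenizer.json preprocessor_config.json \
    --quantization float16
```

The `--copy_files` option carries the tokenizer and preprocessor configuration into the output directory so faster-whisper can load the model without fetching the original repository.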

Example

from faster_whisper import WhisperModel

# Run on GPU with FP16
device = "cuda"
compute_type = "float16"

# or run on CPU with INT8:
# device = "cpu"
# compute_type = "int8"

model = WhisperModel("aipanjab/faster-whisper-medium-pa", device=device, compute_type=compute_type)

segments, info = model.transcribe("audio.mp3")
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))

More information

For more information about the original model, see its model card.
