
Surt: Whisper Small Pilot Phase 1 (Text LM Priming)

LoRA adapter for openai/whisper-small, trained on Gurbani text-only data.

Training Details

  • Phase: 1 (Text LM Priming, decoder self-attention only)
  • LoRA targets: decoder self-attn Q, K, V projections (rank 16)
  • Data: Gurbani text corpus from surindersinghssj/gurbani-asr-text
  • Input: Dummy mel spectrograms (text-only training, no audio)
  • Steps: 1,000 | Batch size: 32 | LR: 1e-4 (cosine decay)
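
Since the adapter touches only the rank-16 Q/K/V projections in the decoder's self-attention, the trainable-parameter budget is easy to estimate. A minimal sketch, assuming whisper-small's published config (d_model = 768, 12 decoder layers); the figures are derived here for illustration, not read from the checkpoint:

```python
# Estimate trainable parameters for the Phase 1 LoRA adapter.
# Assumptions (from whisper-small's config): d_model = 768, 12 decoder layers.
D_MODEL = 768        # hidden size of whisper-small
DECODER_LAYERS = 12  # decoder depth of whisper-small
RANK = 16            # LoRA rank used in this run
PROJECTIONS = 3      # Q, K, V in decoder self-attention

# Each adapted projection adds two low-rank matrices:
# A with shape (rank, d_model) and B with shape (d_model, rank).
params_per_projection = 2 * RANK * D_MODEL
total = params_per_projection * PROJECTIONS * DECODER_LAYERS
print(total)  # 884736 trainable parameters, well under 1% of whisper-small
```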

Results

Metric            Value
Final train loss  2.42
Eval loss         1.23
Train loss (avg)  2.894

Purpose

Phase 1 primes the decoder to produce Gurmukhi text sequences before introducing audio. This checkpoint serves as the starting point for Phase 2 (audio fine-tuning).
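
The dummy-spectrogram setup mentioned under Training Details can be sketched as follows. This is an illustrative placeholder, not the run's actual data pipeline: it assumes Whisper's standard feature shape (80 mel bins by 3000 frames for a 30-second window) and uses zeros as the stand-in, though the run may have used a different constant:

```python
import numpy as np

# Whisper models expect log-mel features of shape (batch, n_mels, frames):
# n_mels = 80 and 3000 frames (30 s at a 10 ms hop). For text-only priming,
# a constant dummy spectrogram stands in for real audio.
N_MELS, N_FRAMES = 80, 3000

def dummy_mel_batch(batch_size: int) -> np.ndarray:
    """Return an all-zero stand-in for Whisper encoder input features."""
    return np.zeros((batch_size, N_MELS, N_FRAMES), dtype=np.float32)

batch = dummy_mel_batch(32)  # matches the batch size of 32 used in this run
print(batch.shape)  # (32, 80, 3000)
```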

W&B Run

surt-pilot/qnssfjr9
