# Surt – Whisper Small Pilot Phase 1 (Text LM Priming)
LoRA adapter for openai/whisper-small, trained on Gurbani text-only data.
## Training Details

- Phase: 1 – Text LM Priming (decoder self-attention only)
- LoRA targets: decoder self-attention Q, K, V projections (rank 16); see the sketch after this list
- Data: Gurbani text corpus from surindersinghssj/gurbani-asr-text
- Input: dummy mel spectrograms (text-only training, no audio)
- Steps: 1,000 | Batch size: 32 | LR: 1e-4 (cosine decay)
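For concreteness, here is a minimal sketch of one such text-only training step, assuming the `transformers` and `peft` libraries. The module-name regex, `lora_alpha`, zero-valued mels, tokenizer language setting, and sample Gurmukhi string are illustrative assumptions, not the author's exact training script:

```python
import torch
from transformers import WhisperForConditionalGeneration, WhisperTokenizer
from peft import LoraConfig, get_peft_model

# Base model the adapter is trained on.
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

# Rank-16 LoRA restricted to decoder self-attention Q, K, V projections.
# The regex is matched against full module names; lora_alpha is assumed.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,  # assumed; not stated in this card
    target_modules=r"model\.decoder\.layers\.\d+\.self_attn\.(q_proj|k_proj|v_proj)",
)
model = get_peft_model(model, lora_config)

# Text-only step: an all-zero mel spectrogram stands in for audio, so the
# gradient signal comes purely from modelling the Gurmukhi token sequence.
tokenizer = WhisperTokenizer.from_pretrained(
    "openai/whisper-small", language="punjabi", task="transcribe"
)
dummy_mels = torch.zeros(1, 80, 3000)  # (batch, mel bins, frames)
labels = tokenizer("ਵਾਹਿਗੁਰੂ", return_tensors="pt").input_ids
# Whisper re-adds <|startoftranscript|> when shifting labels right,
# so drop the leading copy, mirroring the usual HF fine-tuning recipe.
labels = labels[:, 1:]

# Listed schedule: LR 1e-4 with cosine decay over 1,000 steps.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=1000)

loss = model(input_features=dummy_mels, labels=labels).loss
loss.backward()
optimizer.step()
scheduler.step()
optimizer.zero_grad()
```

Because only the decoder self-attention projections carry LoRA weights, the dummy audio never influences which parameters learn; it simply satisfies Whisper's encoder–decoder interface while the decoder is trained as a text language model.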
## Results
| Metric | Value |
|---|---|
| Final train loss | 2.42 |
| Eval loss | 1.23 |
| Train loss (avg) | 2.894 |
## Purpose
Phase 1 primes the decoder to produce Gurmukhi text sequences before introducing audio. This checkpoint serves as the starting point for Phase 2 (audio fine-tuning).
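To pick up from this checkpoint when starting Phase 2, the adapter can be loaded back onto the base model. A minimal sketch, assuming the `peft` library (the repo id is the one on this card):

```python
from transformers import WhisperForConditionalGeneration
from peft import PeftModel

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
model = PeftModel.from_pretrained(
    base,
    "surindersinghssj/surt-whisper-small-pilot-phase1",
    is_trainable=True,  # keep adapter weights updatable for Phase 2 fine-tuning
)
```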
## W&B Run