DevMehdip committed on
Commit ac1fb3f · verified · 1 Parent(s): cfa51b3

Update README.md

Files changed (1)
  1. README.md +20 -42
README.md CHANGED
@@ -1,23 +1,23 @@
- ---
- base_model: openai/whisper-small
- library_name: peft
- tags:
- - whisper
- - lora
- - transformers
- - speech-to-text
- - persian
- - stt
- - fine-tune
- - adapter
- datasets:
- - vhdm/persian-voice-v1
- language:
- - fa
- metrics:
- - cer
- - wer
- ---
+ ---
+ base_model: openai/whisper-small
+ library_name: peft
+ tags:
+ - whisper
+ - lora
+ - transformers
+ - speech-to-text
+ - persian
+ - stt
+ - fine-tune
+ - adapter
+ datasets:
+ - vhdm/persian-voice-v1
+ language:
+ - fa
+ metrics:
+ - cer
+ - wer
+ ---

  # Whisper-Small Persian STT — LoRA Fine-Tuned

@@ -77,25 +77,3 @@ Users should:
  - Provide clean 16kHz mono WAV audio
  - Use domain-specific fine-tuning if necessary
  - Validate outputs before critical use
-
- ---
-
- ## How to Get Started
-
- ```python
- from transformers import WhisperProcessor, WhisperForConditionalGeneration
- from peft import PeftModel
-
- processor = WhisperProcessor.from_pretrained("Mehdipoladrag/REPO_NAME")
- model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
-
- # Load LoRA adapter
- model = PeftModel.from_pretrained(model, "Mehdipoladrag/REPO_NAME")
-
- audio = ...  # your 16kHz mono waveform
-
- inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
- pred_ids = model.generate(inputs["input_features"])
- text = processor.batch_decode(pred_ids, skip_special_tokens=True)[0]
-
- print(text)
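
The usage notes in the diff ask for clean 16 kHz mono input before feeding audio to the model. A minimal sketch of that preprocessing step, using plain NumPy linear interpolation — the helper name `to_mono_16k` is mine, not from the repo, and a real pipeline would use a proper resampler such as torchaudio or librosa:

```python
import numpy as np

def to_mono_16k(waveform: np.ndarray, orig_sr: int, target_sr: int = 16_000) -> np.ndarray:
    """Downmix to mono and linearly resample to target_sr.

    `waveform` is float audio shaped (samples,) or (channels, samples).
    Linear interpolation is a rough stand-in for a polyphase resampler,
    but it is enough to sanity-check shapes and sample rates.
    """
    if waveform.ndim == 2:                    # (channels, samples) -> average to mono
        waveform = waveform.mean(axis=0)
    if orig_sr == target_sr:
        return waveform.astype(np.float32)
    duration = waveform.shape[0] / orig_sr    # length of the clip in seconds
    n_out = int(round(duration * target_sr))  # number of samples at 16 kHz
    old_t = np.linspace(0.0, duration, num=waveform.shape[0], endpoint=False)
    new_t = np.linspace(0.0, duration, num=n_out, endpoint=False)
    return np.interp(new_t, old_t, waveform).astype(np.float32)

# One second of stereo audio at 44.1 kHz becomes 16 000 mono samples.
stereo = np.random.randn(2, 44_100).astype(np.float32)
mono = to_mono_16k(stereo, orig_sr=44_100)
print(mono.shape)  # (16000,)
```

The resulting array can then be passed to the processor as shown in the removed quickstart (`processor(mono, sampling_rate=16000, return_tensors="pt")`).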