Commit 1da68ce · Parent: 7b528f7
Update README.md

README.md CHANGED
@@ -18,7 +18,7 @@ datasets:
 
 <!-- Provide a quick summary of what the model is/does. -->
 
-This model is a **fine-tuned version of the Whisper-large-v2 model, specifically tuned for software developers.** It transcribes words like 'ChatGPT' or 'Webhook'
+This model is a **fine-tuned version of the Whisper-large-v2 model, specifically tuned for software developers.** It transcribes words like 'ChatGPT' or 'Webhook' correctly, which previous Whisper models could not do.
 
 ## Model Details
 
@@ -63,7 +63,7 @@ Note that testing data can not be provided publicly due to the privacy issue.
 Two of the most popular metrics for assessing automatic speech recognition models, WER and CER, were used. <br>
 Additionally, DSWES was used to specifically check the transcription accuracy of software-related words. Note that the higher the DSWES, the better.
 
-For assessment, WhisperX was used as the backbone of the fine-tuned model due to its fast inference speed and
+For assessment, WhisperX was used as the backbone of the fine-tuned model due to its fast inference speed and reduced size.
 Since the backbone of WhisperX is Whisper, I can safely assume that the performance of Whisper would be very similar to that of WhisperX.
 
 ### Results
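For reference, the WER metric named in the second hunk can be made concrete. The sketch below is not the model card's own evaluation code; it is a minimal illustration of the standard definition: the word-level Levenshtein edit distance between a reference transcript and a hypothesis, divided by the number of reference words.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    if not ref:
        raise ValueError("reference must contain at least one word")
    # DP table: d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub_cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,          # deletion
                d[i][j - 1] + 1,          # insertion
                d[i - 1][j - 1] + sub_cost,  # match or substitution
            )
    return d[len(ref)][len(hyp)] / len(ref)


# One substitution ('chat' -> 'chatty') and one deletion ('gpt')
# against a 5-word reference gives WER = 2 / 5 = 0.4
print(wer("the chat gpt webhook fired", "the chatty webhook fired"))
```

CER is the same computation performed over characters instead of words, i.e. replacing `reference.split()` with `list(reference)`.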