Update README.md

The training recipe was based on the wsj recipe in [espnet](https://github.com/espnet/espnet).

This model is a Hybrid CTC/Attention model with a pre-trained HuBERT encoder.

This model was trained on Thai-central and is intended as a supervised pre-trained model for fine-tuning to other Thai dialects (Experiment 2 in the paper).

We provide demo code for running inference with this model on Colab [here](https://colab.research.google.com/drive/1stltGdpG9OV-sCl9QgkvEXZV7fGB2Ixe?usp=sharing). (Please note that you cannot run inference on more than 4 seconds of audio with the free Google Colab tier.)

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

For evaluation, the metrics are CER and WER. Before WER evaluation, transcriptions were re-tokenized using the newmm tokenizer from [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp). In this repository, we also provide the vocabulary for building the newmm tokenizer using this script:

```python
from pythainlp import word_tokenize
```
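Both CER and WER are normalized edit distances: CER compares character sequences, while WER compares the token lists produced by a word tokenizer such as newmm. The sketch below is purely illustrative and is not the evaluation script used for this model; the function names `edit_distance` and `error_rate` are our own.

```python
# Illustrative sketch of CER/WER scoring (not this repo's evaluation code).
# CER operates on characters; WER operates on word tokens, which for Thai
# would come from word_tokenize(text, engine="newmm") in PyThaiNLP.

def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (rolling single-row DP)."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,          # deletion
                dp[j - 1] + 1,      # insertion
                prev + (r != h),    # substitution (free if chars match)
            )
    return dp[-1]

def error_rate(ref, hyp):
    """Edit distance normalized by reference length (CER or WER)."""
    return edit_distance(ref, hyp) / len(ref)

# Character-level (CER-style): 3 edits over 6 reference characters = 0.5
cer = error_rate("kitten", "sitting")

# Token-level (WER-style): 1 substitution over 3 reference tokens
wer = error_rate(["a", "b", "c"], ["a", "x", "c"])
```

In practice, a hypothesis and reference would each be tokenized with the newmm engine before the WER computation, which is why consistent re-tokenization of transcriptions matters for comparable scores.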