Update README.md
README.md CHANGED
@@ -3,7 +3,7 @@
 This tokenizer is a PreTrainedTokenizerFast which is trained on raygx/Nepali-Extended-Corpus datasets.
 This tokenizer is trained from scratch using Tokenizers library.
 This tokenizer uses
-- Model:
+- Model: Tokenizer(WordPiece(unk_token="[UNK]"))
 - Normalizer: normalizers.Sequence([NFD(),Strip()])
 - Pre-processor: pre_tokenizers.Sequence([Whitespace(),Digits(individual_digits=True), Punctuation()])
 - Post-processor: BertProcessing
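
For reference, the sketch below shows how a tokenizer with the configuration listed above could be assembled and trained with the Tokenizers library, then wrapped as a PreTrainedTokenizerFast. The vocabulary size, special-token list, dataset split and column name, and save path are illustrative assumptions, not values taken from this repository.

```python
# Sketch only: builds a WordPiece tokenizer with the normalizer,
# pre-tokenizer, and post-processor named in the README.
# vocab_size, the special-token list, and the "text" column are assumptions.
from datasets import load_dataset
from tokenizers import Tokenizer, normalizers, pre_tokenizers
from tokenizers.models import WordPiece
from tokenizers.normalizers import NFD, Strip
from tokenizers.pre_tokenizers import Whitespace, Digits, Punctuation
from tokenizers.processors import BertProcessing
from tokenizers.trainers import WordPieceTrainer
from transformers import PreTrainedTokenizerFast

# Model: WordPiece with an unknown-token fallback
tokenizer = Tokenizer(WordPiece(unk_token="[UNK]"))

# Normalizer: Unicode NFD decomposition, then strip surrounding whitespace
tokenizer.normalizer = normalizers.Sequence([NFD(), Strip()])

# Pre-processor: split on whitespace, individual digits, and punctuation
tokenizer.pre_tokenizer = pre_tokenizers.Sequence(
    [Whitespace(), Digits(individual_digits=True), Punctuation()]
)

# Train from the corpus (split, column name, and vocab_size are assumed)
corpus = load_dataset("raygx/Nepali-Extended-Corpus", split="train")
trainer = WordPieceTrainer(
    vocab_size=50000,
    special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"],
)
tokenizer.train_from_iterator((row["text"] for row in corpus), trainer=trainer)

# Post-processor: BertProcessing wraps encoded sequences with [CLS]/[SEP]
tokenizer.post_processor = BertProcessing(
    sep=("[SEP]", tokenizer.token_to_id("[SEP]")),
    cls=("[CLS]", tokenizer.token_to_id("[CLS]")),
)

# Wrap as a PreTrainedTokenizerFast for use with transformers
fast_tokenizer = PreTrainedTokenizerFast(
    tokenizer_object=tokenizer,
    unk_token="[UNK]",
    cls_token="[CLS]",
    sep_token="[SEP]",
    pad_token="[PAD]",
    mask_token="[MASK]",
)
fast_tokenizer.save_pretrained("nepali-wordpiece-tokenizer")
```

The saved tokenizer can then be reloaded with AutoTokenizer.from_pretrained pointing at the save directory, or at the hub repository once pushed.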