Upload folder using huggingface_hub

- README.md +3 -1
- merges.txt +0 -0
- tokenizer.json +0 -0
- vocab.json +0 -0
README.md CHANGED

@@ -21,7 +21,9 @@ A **Byte-Level BPE** tokenizer trained on **ind_Latn** data from Fineweb-2-HQ.
 | Language | `ind_Latn` |
 | Target Vocab Size | 8,000 |
 | Final Vocab Size | 8,000 |
-| Pre-tokenizer |
+| Pre-tokenizer | gpt4 |
+| Number handling | individual |
+| Contraction handling | True |
 | Normalizer | NFC |
 | Special Tokens | `<s>`, `</s>`, `<pad>`, `<unk>` |
 | Training Shards | 2 |
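The README describes a Byte-Level BPE tokenizer: training starts from the 256 raw bytes (so no input is ever out-of-vocabulary) and repeatedly merges the most frequent adjacent symbol pair until the target vocabulary size is reached. The actual training code for this repo is not shown in the diff; the following is a minimal toy sketch of the merge loop only, on a stand-in corpus rather than the Fineweb-2-HQ ind_Latn shards:

```python
from collections import Counter

def train_bpe(corpus, num_merges):
    """Toy byte-level BPE trainer: split words into single bytes, then
    repeatedly merge the most frequent adjacent pair of symbols."""
    words = Counter()
    for text in corpus:
        for w in text.split():
            b = w.encode("utf-8")
            # Each word starts as a tuple of length-1 bytes objects.
            words[tuple(b[i:i + 1] for i in range(len(b)))] += 1

    merges = []
    for _ in range(num_merges):
        # Count every adjacent symbol pair, weighted by word frequency.
        pairs = Counter()
        for word, freq in words.items():
            for pair in zip(word, word[1:]):
                pairs[pair] += freq
        if not pairs:
            break
        a, b = max(pairs, key=pairs.get)
        merges.append((a, b))
        # Apply the merge: replace every (a, b) occurrence with a+b.
        new_words = Counter()
        for word, freq in words.items():
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and word[i] == a and word[i + 1] == b:
                    out.append(a + b)
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            new_words[tuple(out)] += freq
        words = new_words
    return merges

merges = train_bpe(["low low lower lowest"], 3)
print(merges)  # first merges pick up the shared "low" prefix
```

A real tokenizer of this kind would also apply the NFC normalizer and a gpt4-style pre-tokenization split before counting, and reserve IDs for `<s>`, `</s>`, `<pad>`, `<unk>`; those steps are omitted here for brevity.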
merges.txt CHANGED

The diff for this file is too large to render. See raw diff.

tokenizer.json CHANGED

The diff for this file is too large to render. See raw diff.

vocab.json CHANGED

The diff for this file is too large to render. See raw diff.