Update README.md
README.md
CHANGED
@@ -9,4 +9,19 @@ size_categories:
- 100M<n<1B
---
# Tokenizer

This is a tokeniser created with a custom-written algorithm over a huge vocabulary of `~1B` tokens. The tokens are provided in the accompanying files (each kept under `<2GB` so they can be tracked with Git LFS). The text corpus comes from the `SlimPajama` dataset by Cerebras and consists of the complete test and validation corpora.
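
Because the token files are large and tracked with Git LFS, one straightforward way to pull them down is the `huggingface_hub` client. This is only a sketch: the repo id below is a placeholder, and treating this as a dataset repo is an assumption.

```python
from huggingface_hub import snapshot_download

# Download every file in the repo to a local cache directory.
# "username/tokenizer-repo" is a placeholder -- use the actual repo id from this page.
# repo_type="dataset" is an assumption about where the files are hosted.
local_dir = snapshot_download(repo_id="username/tokenizer-repo", repo_type="dataset")
print("Files available under:", local_dir)
```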
The final tokeniser is available in two versions: a `0.5B` version (validation data only) and a `1B` version (validation + test data), both created using the same algorithm.

The files include the token counts, the text corpus used, the individual lines/paragraphs from SlimPajama as a JSON list, an ordered tokeniser with token ids (ordered by token count), and an unordered tokeniser with token ids.
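
The exact file names and JSON layouts should be checked in the repo itself; purely as an illustration, a count-ordered vocabulary stored as a JSON list could be turned into a token-to-id mapping like this (the file name and structure below are assumptions, not the actual layout):

```python
import json

# Hypothetical file name and layout -- the real files in this repo may differ.
with open("ordered_tokeniser.json", "r", encoding="utf-8") as f:
    ordered_vocab = json.load(f)  # assumed: a list of tokens sorted by count

# Assign ids in count order, mirroring the "ordered tokeniser with token ids".
token_to_id = {token: idx for idx, token in enumerate(ordered_vocab)}
id_to_token = {idx: tok for tok, idx in token_to_id.items()}

print(f"Loaded {len(token_to_id)} tokens")
```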
## To do:

- Write custom code for the final tokenisation part (to break text into tokens); a rough sketch of one possible approach appears after this list
- Create a Python library for using the tokeniser
- Put the files on GitHub for a general overview
- Release the experimentation notebook and the tokenisation code (as part of the library)
- Make a write-up explaining the algorithm used
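
The final text-to-tokens step is still on the to-do list above. Purely as an illustration (this is not the algorithm used to build the vocabulary, just one possible way to apply a finished vocabulary), a greedy longest-match pass could look like this:

```python
def greedy_tokenise(text: str, vocab: set[str], max_token_len: int = 16) -> list[str]:
    """Greedy longest-match: at each position take the longest vocabulary
    entry that matches, falling back to a single character."""
    tokens = []
    i = 0
    while i < len(text):
        match = text[i]  # fallback: a single character
        for length in range(min(max_token_len, len(text) - i), 1, -1):
            candidate = text[i:i + length]
            if candidate in vocab:
                match = candidate
                break
        tokens.append(match)
        i += len(match)
    return tokens


# Tiny toy vocabulary just to show the call shape.
toy_vocab = {"token", "iser", "tok", "en"}
print(greedy_tokenise("tokeniser", toy_vocab))  # -> ['token', 'iser']
```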
`Note: the algorithm used is not an industry-standard algorithm like Byte Pair Encoding. In fact, I have not studied BPE yet; I wanted to create a tokeniser from scratch first, without any idea of what is currently used, and then compare the two. So a lot of what I implement may have similarities to industry approaches, but may not provide the same performance improvements.`

I am storing the files on HF rather than GitHub because of storage limits and for easy availability to the AI community.