Tasmay-Tib committed · verified
Commit 4f00213 · 1 Parent(s): a4221ee

Update README.md

Files changed (1): README.md (+2 -0)

README.md CHANGED
@@ -15,6 +15,8 @@ This is a tokeniser created on a custom-written algorithm on a huge vocabulary o
 The final tokeniser is available in two versions (`0.5B` version - Val. data only and `1B` version - Val data + Test data, created using the same algo).
 The files includes the token counts, the text corpus used, individual lines/paras from SlimPajama as a list JSON, ordered tokeniser with token ids (in order of their counts), unordered tokeniser with token ids.
 
+The tokeniser contains 131,072 tokens.
+
 ## To do:
 - Write custom code for the final tokenisation part (to break text into tokens)
 - Create a python library for using the tokeniser
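The first To-do item (breaking text into tokens against the vocabulary) could be sketched as a greedy longest-match lookup over a token-to-id mapping. This is only a hypothetical illustration, not the repo's actual algorithm: the `tokenise` function and the toy vocabulary below are invented for the example, while the real tokeniser maps 131,072 tokens to ids.

```python
def tokenise(text, vocab):
    """Greedily match the longest vocab entry at each position.

    vocab: dict mapping token string -> token id (a toy stand-in for
    the repo's ordered tokeniser JSON).
    """
    max_len = max(len(t) for t in vocab)
    ids = []
    i = 0
    while i < len(text):
        # Try the longest candidate substring first, shrinking until a match.
        for j in range(min(len(text), i + max_len), i, -1):
            piece = text[i:j]
            if piece in vocab:
                ids.append(vocab[piece])
                i = j
                break
        else:
            # No vocab entry matched: skip the character. A real tokeniser
            # would fall back to bytes or an <unk> id here.
            i += 1
    return ids

# Toy vocabulary for illustration only.
toy_vocab = {"to": 0, "ken": 1, "token": 2, "iser": 3, "s": 4}
print(tokenise("tokenisers", toy_vocab))  # greedy match: token, iser, s -> [2, 3, 4]
```

Greedy longest-match is the simplest workable scheme; the packaged python library mentioned in the second To-do item would wrap a routine like this around the published tokeniser files.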