Jordan Legg committed
Commit 7090536 · 1 Parent(s): 03b6091

feat: tokenized details

Files changed (2):
  1. README.md +8 -1
  2. scripts/tokenize/main.py +13 -0
README.md CHANGED
@@ -27,4 +27,11 @@ size_categories:
 
 From the Frontier Research Team at **Takara.ai**, we present **MicroPajama** — a dataset built for distillation and feature extraction, derived from the larger SlimPajama with Wikipedia removed.
 
----
+---
+
+This dataset contains **253,636,240** tokens under the BAAI/bge-large-en-v1.5 WordPiece tokenizer; you can reproduce this count with `scripts/tokenize/main.py`.
+
+---
+For research inquiries and press, please reach out to research@takara.ai
+
+> 人類を変革する
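
The README names feature extraction as a use case for the dataset; a minimal sketch of what that could look like with the same bge-large-en-v1.5 model is below. This is not part of the commit: the CLS pooling and L2 normalization follow the BGE model card's recommendation, and the truncation settings are assumptions.

```python
# Minimal feature-extraction sketch for MicroPajama with bge-large-en-v1.5.
# Not part of this commit; pooling choice follows the BGE model card.
import torch
from datasets import load_dataset
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("BAAI/bge-large-en-v1.5")
model = AutoModel.from_pretrained("BAAI/bge-large-en-v1.5").eval()

ds = load_dataset("takara-ai/micropajama", split="train")
batch = tok(ds["text"][:4], padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    out = model(**batch)

# BGE models use the [CLS] token embedding, L2-normalized.
emb = torch.nn.functional.normalize(out.last_hidden_state[:, 0], dim=-1)
print(emb.shape)  # e.g. torch.Size([4, 1024])
```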
scripts/tokenize/main.py ADDED
@@ -0,0 +1,13 @@
+from datasets import load_dataset
+from transformers import AutoTokenizer
+
+
+def main():
+    ds = load_dataset("takara-ai/micropajama", split="train")
+    tok = AutoTokenizer.from_pretrained("BAAI/bge-large-en-v1.5")
+    lens = ds.map(lambda b: {"len": [len(x) for x in tok(b["text"], add_special_tokens=False).input_ids]}, batched=True, remove_columns=ds.column_names)
+    print(sum(lens["len"]))
+
+
+if __name__ == "__main__":
+    main()
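
As a usage note: the script's `ds.map` call caches a mapped copy of the dataset locally. A streaming variant (a sketch, not in the commit; it assumes the same `text` column) should reproduce the same count without that cache, at the cost of slower per-example iteration:

```python
# Streaming variant of scripts/tokenize/main.py (a sketch, not in the commit):
# counts tokens lazily without caching a mapped copy of the dataset.
from datasets import load_dataset
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("BAAI/bge-large-en-v1.5")
ds = load_dataset("takara-ai/micropajama", split="train", streaming=True)

total = 0
for ex in ds:  # iterate examples one at a time over the network
    total += len(tok(ex["text"], add_special_tokens=False).input_ids)
print(total)  # expected: 253,636,240 per the README
```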