iliasslasri committed (verified) · Commit 93406a2 · Parent: a677f1a

add readme

Files changed (1): README.md (+39, -0)

---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- moe
- olmoe
- pretraining
- allenai
pretty_name: Tokenized OLMoE Mix
size_categories:
- 1B<n<10B
---

# Dataset Card for Tokenized OLMoE Mix

## Dataset Summary
This dataset contains pre-tokenized training and evaluation data designed for training custom small-scale **OLMoE (Mixture-of-Experts)** models.

The data is sourced primarily from the official AI2 Dolma 1 and C4 datasets and was curated for ablation studies and reproduction experiments related to the [OLMoE Technical Paper (arXiv:2409.02060)](https://arxiv.org/abs/2409.02060). It is provided in `.npy` format, pre-tokenized with the `allenai/gpt-neox-olmo-dolma-v1_5` tokenizer.

## Dataset Structure
The dataset totals **7.38 GB** and is split into three parts for easier handling:
* `part-0-00000.npy` (2.51 GB)
* `part-0-00001.npy` (4.29 GB)
* `part-0-00002.npy` (574 MB)

+ ## Data Sources & Composition
31
+ Our training mix consists of approximately **4.7 Billion tokens** in total, built from the following sources:
32
+
33
+ ### 1. Training Data (3.689B tokens)
34
+ * **Source:** A Wikipedia subset from Dolma 1.
35
+ * **Original HF Dataset:** [`allenai/OLMoE-mix-0924`](https://huggingface.co/datasets/allenai/OLMoE-mix-0924)
36
+ * **Command used to fetch raw data:**
37
+ ```bash
38
+ wget -O data/wiki-001.json.gz "[https://huggingface.co/datasets/allenai/OLMoE-mix-0924/resolve/main/data/wiki/wiki-0001.json.gz?download=true](https://huggingface.co/datasets/allenai/OLMoE-mix-0924/resolve/main/data/wiki/wiki-0001.json.gz?download=true)"
39
+ ```
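The fetched file follows Dolma's newline-delimited JSON layout, with one document per line carrying a `text` field (an assumption worth checking against the actual download). A minimal sketch for iterating over the raw documents:

```python
import gzip
import json

# Minimal sketch, assuming newline-delimited JSON with a "text" field per
# document (the usual Dolma layout); verify against wiki-0001.json.gz.
def iter_documents(path):
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            if line.strip():  # skip any blank lines
                yield json.loads(line)

# Usage:
# for doc in iter_documents("data/wiki-001.json.gz"):
#     print(doc["text"][:80])
```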