krisbailey committed · commit 9a6679d · verified · 1 parent: 64ec6a5

Update README.md

Files changed (1): README.md (+56, -50)
---
license: odc-by
task_categories:
- text-generation
language:
- en
tags:
- rp
- 100M
- parquet
- redpajama
- reference-reproduction
- benchmark-subset
- open-pretraining-data
- reproducible-dataset
- data-slicing
size_categories:
- 100M<n<1B
---

# RedPajama-Data-V2-100M

## Dataset Description
This is a **100 million token** subset of [krisbailey/RedPajama-Data-V2-1B](https://huggingface.co/datasets/krisbailey/RedPajama-Data-V2-1B), itself a subset of [togethercomputer/RedPajama-Data-V2](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2).

## Motivation
100M tokens is a standard size for:
- **CI/CD Pipelines:** Small enough to download and train on within a unit-test budget.
- **Debugging:** Verifying training loops end to end without waiting hours.
- **Scaling Laws:** The first step in a logarithmic scaling series (100M -> 1B -> 10B).
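The CI/CD use case above can be sketched as a fast sanity check over a handful of rows. The helper below is purely illustrative: plain dicts stand in for the dataset's records, and whitespace splitting stands in for a real tokenizer.

```python
# A minimal CI-style sanity check (a sketch, not part of this dataset's tooling).
# Rows are assumed to be {"text": ...} records; whitespace splitting is a
# placeholder for a real tokenizer.

def sanity_check(rows, min_tokens=1):
    """Fail fast if any row is empty; return the total token count."""
    total = 0
    for row in rows:
        n = len(row["text"].split())
        assert n >= min_tokens, f"empty row: {row!r}"
        total += n
    return total

sample = [{"text": "hello world"}, {"text": "one two three"}]
print(sanity_check(sample))  # 5
```

A check like this runs in milliseconds, so it can gate a training pipeline before any expensive download or training step starts.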

## Dataset Details
- **Total Tokens:** 99,999,721
- **Source:** krisbailey/RedPajama-Data-V2-1B
- **Structure:** First ~10% of the randomized 1B dataset.
- **Format:** Parquet (Snappy compression), single file
- **Producer:** Kris Bailey (kris@krisbailey.com)
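The "first ~10%" structure can be reproduced with a simple leading slice, since the parent dataset is already randomized. A minimal sketch, where a plain list stands in for the 1B-token parent:

```python
# Sketch of taking the leading ~10% of an already-shuffled parent dataset.
# A list of dicts stands in for the real parent here.

def first_fraction(rows, fraction=0.10):
    """Return the leading slice covering roughly `fraction` of the rows."""
    cutoff = int(len(rows) * fraction)
    return rows[:cutoff]

parent = [{"id": i} for i in range(50)]  # stand-in for the randomized parent
subset = first_fraction(parent)
print(len(subset))  # 5
```

Because the parent was shuffled first, a leading slice is statistically equivalent to a random sample, which is what makes this subset usable for scaling-law comparisons.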

## Usage

```python
from datasets import load_dataset

ds = load_dataset("krisbailey/RedPajama-Data-V2-100M", split="train")
print(ds[0])
```
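For debugging runs it is often enough to peek at the first few rows rather than materialize the whole split; `datasets` supports this via `load_dataset(..., streaming=True)`. The lazy-iteration pattern looks like this (a local generator stands in for the streamed split, so the sketch runs offline):

```python
from itertools import islice

def stream_rows():
    # Stand-in for load_dataset(..., streaming=True), which likewise yields
    # rows lazily instead of downloading the full Parquet file first.
    for i in range(1_000_000):
        yield {"text": f"document {i}"}

# Take only the first three rows; the generator is never fully consumed.
first_three = list(islice(stream_rows(), 3))
print(first_three[0]["text"])  # document 0
```

The same `islice` pattern applied to a streamed `load_dataset` split lets a debugging session start inspecting data within seconds.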

## Citation
```bibtex
@article{together2023redpajama,
  title   = {RedPajama: An Open Source Recipe to Reproduce LLaMA training dataset},
  author  = {Together Computer},
  journal = {https://github.com/togethercomputer/RedPajama-Data},
  year    = {2023}
}
```