---
license: odc-by
task_categories:
- text-generation
language:
- en
tags:
- rp
- 100M
- parquet
- redpajama
- reference-reproduction
- benchmark-subset
- open-pretraining-data
- reproducible-dataset
- data-slicing
size_categories:
- 100M<n<1B
---

# RedPajama-Data-V2-100M

## Dataset Description
This is a **100 million-token** subset of [krisbailey/RedPajama-Data-V2-1B](https://huggingface.co/datasets/krisbailey/RedPajama-Data-V2-1B), which is itself a subset of [togethercomputer/RedPajama-Data-V2](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2).

## Motivation
100M tokens is a practical size for:
- **CI/CD Pipelines:** Small enough to download and train on quickly in automated tests (see the sketch after this list).
- **Debugging:** Verifying a training loop end to end without waiting hours for data.
- **Scaling Laws:** The first step in a logarithmic scaling series (100M -> 1B -> 10B).
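
For example, a CI job can load just a prefix of the split rather than working with all 100M tokens; a minimal sketch using the standard `datasets` slice syntax (the 1,000-row cutoff is an arbitrary choice for illustration):

```python
from datasets import load_dataset

# Load only the first 1,000 rows so a unit test stays fast.
ds = load_dataset("krisbailey/RedPajama-Data-V2-100M", split="train[:1000]")
assert len(ds) == 1000
```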

## Dataset Details
- **Total Tokens:** 99,999,721
- **Source:** krisbailey/RedPajama-Data-V2-1B
- **Structure:** First ~10% of the randomized 1B dataset.
- **Format:** Single Parquet file (Snappy compression)
- **Producer:** Kris Bailey (kris@krisbailey.com)
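
Because the data ships as a single Snappy-compressed Parquet file, the shard can also be inspected directly; a sketch using `huggingface_hub` and `pyarrow` (the file-discovery step is generic, since the exact filename inside the repo is not stated here):

```python
from huggingface_hub import hf_hub_download, list_repo_files
import pyarrow.parquet as pq

repo = "krisbailey/RedPajama-Data-V2-100M"

# Locate the Parquet shard in the dataset repo rather than hard-coding a name.
shard = next(f for f in list_repo_files(repo, repo_type="dataset") if f.endswith(".parquet"))
path = hf_hub_download(repo, shard, repo_type="dataset")

# Read only the footer metadata; no row data is loaded into memory.
meta = pq.ParquetFile(path).metadata
print(meta.num_rows, meta.row_group(0).column(0).compression)  # compression should be SNAPPY
```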

## Usage

```python
from datasets import load_dataset

ds = load_dataset("krisbailey/RedPajama-Data-V2-100M", split="train")
print(ds[0])
```
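
For memory-constrained jobs, the same split can be streamed so records are fetched lazily instead of downloading the whole file up front (standard `datasets` streaming, not specific to this dataset):

```python
from datasets import load_dataset

# Iterate lazily; nothing is materialized beyond the records read so far.
ds = load_dataset("krisbailey/RedPajama-Data-V2-100M", split="train", streaming=True)
for i, example in enumerate(ds):
    print(example)  # field names follow the upstream RedPajama-V2 schema
    if i >= 2:
        break
```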

## Citation
```bibtex
@software{together2023redpajama,
  title  = {RedPajama: An Open Source Recipe to Reproduce LLaMA training dataset},
  author = {Together Computer},
  url    = {https://github.com/togethercomputer/RedPajama-Data},
  year   = {2023}
}
```