## Dataset Summary

This dataset is a **2 Billion token subset** of the [RedPajama-Data-1T](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) dataset created by Together Computer.

It was created to provide a smaller, lightweight version of the massive 1.2 Trillion token dataset for testing, debugging, and small-scale pre-training experiments. This specific subset was streamed primarily from the **C4 (Colossal Clean Crawled Corpus)** partition of the original RedPajama dataset.

## Dataset Structure

The dataset retains the original structure of RedPajama. Each row contains:

- **text**: The actual content (string).
- **meta**: Metadata dict containing `url`, `timestamp`, `source`, etc.
- **subset**: The name of the RedPajama subset this sample came from (e.g., `c4`).

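For illustration, a single row with the schema above looks like the following. Only the field names (`text`, `meta`, `subset`) come from this card; the values are invented:

```python
# Hypothetical example row; the values are made up for illustration,
# only the field names match the schema described above.
row = {
    "text": "The quick brown fox jumps over the lazy dog.",
    "meta": {
        "url": "https://example.com/article",
        "timestamp": "2019-04-01T00:00:00Z",
        "source": "c4",
    },
    "subset": "c4",
}

# Fields are accessed like any Python dict.
print(row["subset"])         # c4
print(row["meta"]["source"]) # c4
```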
## Source Data

This dataset is a direct downstream derivative of **RedPajama-Data-1T**.

- **Original Repository:** [togethercomputer/RedPajama-Data-1T](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T)
- **Original Author:** Together Computer
- **Subset Used:** `c4` (Colossal Clean Crawled Corpus)

## Creation Process

This dataset was generated using the Hugging Face `datasets` library with the following methodology:

1. **Direct Streaming:** Data was streamed directly from the original source files to avoid downloading the full 5TB dataset.
2. **Selection:** The first ~2 Billion tokens were selected from the `c4` split.
3. **Token Counting:** Token counts were estimated using the `EleutherAI/gpt-neox-20b` tokenizer (or a 1 token ≈ 4 characters approximation).

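The selection loop in steps 1–3 can be sketched as follows. This is an illustrative reconstruction, not the actual generation script: the function names are hypothetical, and it uses the 1 token ≈ 4 characters approximation in place of the `gpt-neox-20b` tokenizer. In the real run, `rows` would be a streamed iterator from `datasets.load_dataset("togethercomputer/RedPajama-Data-1T", streaming=True)` and the budget would be 2 billion:

```python
def approx_tokens(text: str) -> int:
    """Estimate token count with the 1 token ~= 4 characters heuristic."""
    return max(1, len(text) // 4)

def take_token_budget(rows, budget: int):
    """Yield rows from a (possibly streamed) iterable until the
    approximate token budget is reached, then stop."""
    total = 0
    for row in rows:
        yield row
        total += approx_tokens(row["text"])
        if total >= budget:
            break

# Illustrative usage with in-memory rows and a tiny budget; the real
# run iterated the streamed `c4` split with budget = 2_000_000_000.
sample = [{"text": "a" * 40}, {"text": "b" * 40}, {"text": "c" * 40}]
selected = list(take_token_budget(sample, budget=20))
print(len(selected))  # 2
```

Streaming keeps memory flat: rows are consumed and counted one at a time, so the loop never materializes more than the selected subset.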
## Licensing Information

Since this is a subset of RedPajama, it inherits the licensing terms of the original data.

- **RedPajama-Data-1T** is distributed under the licenses of its underlying data sources.
- **C4 (Common Crawl)**: Terms of Use are available [here](https://commoncrawl.org/terms-of-use/).
- **Code/Scripts**: The code used to generate this subset is Open Source.

## Citation

If you use this dataset, please cite the original RedPajama release:

```bibtex
@software{together2023redpajama,
  author = {Together Computer},
  title = {RedPajama: An Open Source Recipe to Reproduce LLaMA training dataset},
  month = apr,
  year = 2023,
  url = {https://github.com/togethercomputer/RedPajama-Data}
}
```