---
license: odc-by
task_categories:
  - text-generation
language:
  - en
tags:
  - redpajama
  - v2
  - 1B
  - parquet
  - reference-reproduction
  - benchmark-subset
  - open-pretraining-data
  - reproducible-dataset
  - data-slicing
size_categories:
  - 1B<n<10B
---

# RedPajama-Data-V2 1B

## Dataset Description

This is a 1.01 billion token subset of the togethercomputer/RedPajama-Data-V2 dataset (specifically derived from the `sample-10B` config). It was created by randomly sampling the source data.

## Motivation

RedPajama V2 is a state-of-the-art web dataset with rich quality signals. This 1B-token subset allows rapid testing of those quality signals and other filtering experiments without needing to process the full multi-trillion-token dataset.

## Dataset Details

### Usage

```python
from datasets import load_dataset

ds = load_dataset("krisbailey/RedPajama-Data-V2-1B", split="train")
print(ds[0])
```

### Subsets & Slicing

Since this dataset was randomly shuffled during creation, you can safely slice it to get smaller, representative subsets (e.g., for scaling-law experiments) without needing to download the full dataset.

```python
# 100M-token subset (approx. 10%)
ds_100m = load_dataset("krisbailey/RedPajama-Data-V2-1B", split="train[:10%]")

# 500M-token subset (approx. 50%)
ds_500m = load_dataset("krisbailey/RedPajama-Data-V2-1B", split="train[:50%]")
```
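Because a percentage slice selects rows rather than tokens, the token count of a slice is only approximate. Assuming tokens are spread roughly evenly across the shuffled rows (a reasonable assumption after a random shuffle), the expected token count of a fractional slice can be estimated from the total in the Data Mixture table:

```python
# Total token count taken from the Data Mixture table in this card.
TOTAL_TOKENS = 1_005_000_066

def estimated_slice_tokens(fraction: float) -> int:
    """Estimate the token count of a `train[:fraction]` slice,
    assuming tokens are distributed roughly evenly across rows."""
    return round(TOTAL_TOKENS * fraction)

print(estimated_slice_tokens(0.10))  # roughly 100.5M tokens
print(estimated_slice_tokens(0.50))  # roughly 502.5M tokens
```

The actual token count of a given slice will deviate slightly from this estimate, since document lengths vary; for scaling-law experiments the difference is usually negligible.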

## Citation

```bibtex
@article{together2023redpajama,
  title={RedPajama: An Open Source Recipe to Reproduce LLaMA training dataset},
  author={Together Computer},
  journal={https://github.com/togethercomputer/RedPajama-Data},
  year={2023}
}
```

## Data Mixture

| Subset | Tokens | % of Total |
|---|---:|---:|
| redpajama-v2-sample | 1,005,000,066 | 100.00% |
| **Total** | **1,005,000,066** | **100.00%** |