---
license: apache-2.0
task_categories:
  - text-generation
language:
  - en
tags:
  - redpajama
  - llm
  - dataset-reproduction
  - redpajama-1b
  - redpajama-subset
  - redpajama-weighted
  - redpajama-sample
  - natural-language-processing
size_categories:
  - 100M<n<1B
pretty_name: RedPajama 1B Weighted Subset
---

# RedPajama-1B-Weighted

A canonical 1-billion-token weighted subset of the RedPajama-Data-1T dataset.

## Dataset Description

This dataset is a strict downsample of the RedPajama-10B-Weighted dataset: it preserves the exact domain distribution of the full 1T corpus, resized to a lightweight 1-billion-token footprint.

This dataset is ideal for:

- **Rapid Prototyping:** Train small models or debug pipelines in minutes rather than days.
- **Reference Baselines:** Use a standard, well-defined subset for comparative benchmarks.
- **Educational Use:** Explore the properties of large-scale pretraining data on consumer hardware.

## Dataset Details

### Motivation

While the 10B subset is manageable, sometimes you need something even faster. A 1-billion-token dataset is the "Goldilocks" size for many initial experiments: large enough to train a meaningful small language model (e.g., TinyLlama scale), yet small enough to download and process on a laptop.

We created this dataset by strictly downsampling the 10B dataset, ensuring that its distribution remains consistent with the larger parent datasets.

### Dataset Creation Process

#### 1. Source Selection

We used the RedPajama-10B-Weighted dataset as the source. This parent dataset was already constructed via weighted interleaving of the original RedPajama corpus.

#### 2. Global Shuffling

The 10B dataset was globally shuffled (seed = 43) to ensure that selecting the first $N$ tokens yields a random, representative sample rather than a temporal slice.

#### 3. Truncation

We selected the first 1 billion tokens from the shuffled stream.

#### 4. Verification

We verified that the final subset retains the correct proportional mix of CommonCrawl, C4, GitHub, etc., matching the target distribution.
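Steps 2–4 above can be sketched at toy scale. This is a minimal, self-contained simulation, not the actual pipeline: the document stream, its size, and the token budget are illustrative stand-ins for RedPajama-10B-Weighted and the real 1B-token budget.

```python
import random
from collections import Counter

# Target per-source token shares, taken from the Composition table below.
WEIGHTS = {
    "common_crawl": 0.7416, "c4": 0.1478, "github": 0.0498,
    "arxiv": 0.0236, "wikipedia": 0.0203, "stackexchange": 0.0169,
}

# Hypothetical parent stream: (source, token_count) documents drawn with the
# target weights, standing in for the 10B parent dataset.
rng = random.Random(0)
docs = [
    (rng.choices(list(WEIGHTS), weights=list(WEIGHTS.values()))[0], 100)
    for _ in range(100_000)
]

# Step 2: global shuffle with a fixed seed (the real run used seed 43).
random.Random(43).shuffle(docs)

# Step 3: truncate the shuffled stream at the token budget
# (1M here stands in for the real 1B budget).
BUDGET = 1_000_000
subset, total = [], 0
for source, n_tokens in docs:
    if total >= BUDGET:
        break
    subset.append((source, n_tokens))
    total += n_tokens

# Step 4: verify the subset's per-source token shares match the targets.
token_counts = Counter()
for source, n_tokens in subset:
    token_counts[source] += n_tokens
for source, target in WEIGHTS.items():
    share = token_counts[source] / total
    assert abs(share - target) < 0.02, (source, share, target)
```

Because the shuffle seed is fixed before truncation, the selection is deterministic and reproducible, which is what makes the "first $N$ tokens" slice a valid random sample.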

### Composition

| Subset        | Weight | Approx. Tokens |
|---------------|--------|----------------|
| CommonCrawl   | 74.16% | ~741.6 M       |
| C4            | 14.78% | ~147.8 M       |
| GitHub        | 4.98%  | ~49.8 M        |
| ArXiv         | 2.36%  | ~23.6 M        |
| Wikipedia     | 2.03%  | ~20.3 M        |
| StackExchange | 1.69%  | ~16.9 M        |
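The "Approx. Tokens" column is each subset's weight applied to the 1B-token budget; a quick arithmetic check (weights copied from the table):

```python
# Verify the composition weights sum to 100% and that each subset's
# approximate token count is its weight applied to the 1B-token budget.
WEIGHTS_PCT = {
    "CommonCrawl": 74.16, "C4": 14.78, "GitHub": 4.98,
    "ArXiv": 2.36, "Wikipedia": 2.03, "StackExchange": 1.69,
}
TOTAL_TOKENS = 1_000_000_000

assert abs(sum(WEIGHTS_PCT.values()) - 100.0) < 1e-6

approx_tokens = {name: pct / 100 * TOTAL_TOKENS for name, pct in WEIGHTS_PCT.items()}
assert abs(approx_tokens["CommonCrawl"] - 741_600_000) < 1
assert abs(approx_tokens["StackExchange"] - 16_900_000) < 1
```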

## Usage

```python
from datasets import load_dataset

# Load the 1B weighted subset
ds = load_dataset("krisbailey/RedPajama-1B-Weighted", split="train")

print(ds)
```

## Citation

If you use this dataset, please cite the original RedPajama work:

```bibtex
@software{together2023redpajama,
  author = {Together Computer},
  title = {RedPajama: An Open Source Recipe to Reproduce LLaMA training dataset},
  month = April,
  year = 2023,
  url = {https://github.com/togethercomputer/RedPajama-Data}
}
```