RedPajama-2B-Sample

Dataset Summary

This dataset is a 2-billion-token subset of the RedPajama-Data-1T dataset created by Together Computer.

It was created to provide a smaller, lightweight version of the full 1.2-trillion-token dataset for testing, debugging, and small-scale pre-training experiments. This subset was streamed primarily from the C4 (Colossal Clean Crawled Corpus) partition of the original RedPajama dataset.

Dataset Structure

The dataset retains the original structure of RedPajama. Each row contains:

  • text: The actual content (string).
  • meta: Metadata dict containing url, timestamp, source, etc.
  • subset: The name of the RedPajama subset this sample came from (e.g., c4).
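The row schema above can be illustrated with a small example. The values below are invented for illustration only (they are not real rows from the dataset); the field names match the structure described above:

```python
# A hypothetical row following the RedPajama schema described above.
# Values are illustrative placeholders, not real data from the dataset.
row = {
    "text": "The quick brown fox jumps over the lazy dog.",
    "meta": {
        "url": "https://example.com/article",
        "timestamp": "2019-04-01T00:00:00Z",
        "source": "c4",
    },
    "subset": "c4",
}

# Access the fields as you would for any row yielded by the datasets library.
content = row["text"]        # the document content (string)
origin = row["meta"]["url"]  # provenance URL from the metadata dict
partition = row["subset"]    # which RedPajama partition the sample came from

print(partition)  # -> c4
```

In practice you would iterate over such rows via the Hugging Face datasets library rather than constructing them by hand.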

Source Data

This dataset is a direct downstream derivative of RedPajama-Data-1T.

Creation Process

This dataset was generated using the Hugging Face datasets library with the following methodology:

  1. Direct Streaming: Data was streamed directly from the original source files, avoiding a download of the full ~5 TB dataset.
  2. Selection: The first ~2 billion tokens were selected from the c4 split.
  3. Token Counting: Token counts were estimated using the EleutherAI/gpt-neox-20b tokenizer (or, as a fallback, a 1 token ≈ 4 characters approximation).
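The selection step above can be sketched as a small filter that consumes a document stream and stops once an estimated token budget is reached. This is a minimal sketch, not the actual generation script: the function and variable names are hypothetical, and it uses the 1 token ≈ 4 characters approximation mentioned in step 3:

```python
# Sketch: take documents from a stream until an estimated token budget
# is reached. Names are illustrative, not the dataset's actual code.

CHARS_PER_TOKEN = 4            # rough heuristic for English web text
TOKEN_BUDGET = 2_000_000_000   # target: ~2 billion tokens

def take_token_budget(docs, budget=TOKEN_BUDGET):
    """Yield documents until the estimated token count reaches `budget`."""
    total = 0
    for doc in docs:
        est_tokens = len(doc["text"]) // CHARS_PER_TOKEN
        if total + est_tokens > budget:
            break
        total += est_tokens
        yield doc

# Toy demonstration with an in-memory stream and a tiny budget:
stream = ({"text": "x" * 400} for _ in range(100))  # each doc ~100 tokens
subset = list(take_token_budget(stream, budget=250))
print(len(subset))  # -> 2 (a third doc would exceed the 250-token budget)
```

With a real run, `docs` would be an `IterableDataset` obtained from the datasets library in streaming mode, so no local copy of the full corpus is needed.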

Licensing Information

Since this is a subset of RedPajama, it inherits the licensing terms of the original data.

  • RedPajama-Data-1T is distributed under the licenses of its underlying data sources.
  • C4 (Common Crawl): governed by the Common Crawl Terms of Use.
  • Code/Scripts: The code used to generate this subset is Open Source.

Citation

If you use this dataset, please cite the original RedPajama project:

@software{together2023redpajama,
  author = {Together Computer},
  title = {RedPajama: An Open Source Recipe to Reproduce LLaMA training dataset},
  month = apr,
  year = 2023,
  url = {https://github.com/togethercomputer/RedPajama-Data}
}