---
license: odc-by
task_categories:
- text-generation
language:
- en
tags:
- falcon
- 100M
- parquet
- web-refined
- text-generation
- clean-web-corpus
- llm-pretrain
- domain-agnostic
- sentence-quality-filtered
- huggingface-refinedweb
size_categories:
- 100M<n<1B
---

# falcon-refinedweb-100M
## Dataset Description
This is a ~100M-token subset of [krisbailey/falcon-refinedweb-1B](https://huggingface.co/datasets/krisbailey/falcon-refinedweb-1B), which is itself a subset of [tiiuae/falcon-refinedweb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb).
## Motivation

100M tokens is a practical size for:
- CI/CD Pipelines: Small enough to download and train on quickly in unit tests (see the sketch after this list).
- Debugging: Verifying training loops end to end without waiting hours.
- Scaling Laws: The first step in a logarithmic scaling series (100M -> 1B -> 10B).
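
As an illustration of the CI use case, a smoke test might load only a slice of the split. This is a minimal sketch, assuming the text column is named `content` as in the parent RefinedWeb dataset (the card does not state the schema):

```python
from datasets import load_dataset

# CI smoke test: load only the first 1,000 rows via split slicing.
# (The parquet file is still downloaded once and cached; later runs
# reuse the cache.)
ds = load_dataset("krisbailey/falcon-refinedweb-100M", split="train[:1000]")
assert len(ds) == 1000
assert "content" in ds.column_names  # column name assumed from RefinedWeb
```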
## Dataset Details
- Total Tokens: 99,999,978
- Source: krisbailey/falcon-refinedweb-1B
- Structure: First ~10% of the randomized 1B dataset.
- Format: Parquet (Snappy compression), single file
- Producer: Kris Bailey (kris@krisbailey.com)
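
The token total can be re-derived by tokenizing the split. The sketch below assumes the text column is named `content` (as in the parent RefinedWeb dataset) and that Falcon's own tokenizer matches the one used for counting; the card states neither:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Recount tokens to sanity-check the stated total of 99,999,978.
# Tokenizer choice is an assumption; Falcon's is used here as a guess.
tok = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
ds = load_dataset("krisbailey/falcon-refinedweb-100M", split="train")

total = 0
for batch in ds.iter(batch_size=1000):  # batched iteration over the Dataset
    enc = tok(batch["content"], add_special_tokens=False)
    total += sum(len(ids) for ids in enc["input_ids"])
print(f"{total:,} tokens")
```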
## Usage

```python
from datasets import load_dataset

ds = load_dataset("krisbailey/falcon-refinedweb-100M", split="train")
print(ds[0])
```
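
For pipelines that should not download the whole file up front, the `datasets` streaming mode offers a lazy alternative (a sketch, not required for normal use):

```python
from datasets import load_dataset

# Stream records lazily instead of downloading the full parquet file,
# which suits quick inspection and pretraining dataloaders alike.
stream = load_dataset(
    "krisbailey/falcon-refinedweb-100M", split="train", streaming=True
)
for record in stream.take(3):
    print(record)
```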
## Citation

```bibtex
@article{penedo2023refinedweb,
  title={The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only},
  author={Penedo, Guilherme and Malartic, Quentin and Hesslow, Daniel and Cojocaru, Ruxandra and Cappelli, Alessandro and Alobeidli, Hamza and Pannier, Baptiste and Almazrouei, Ebtesam and Launay, Julien},
  journal={arXiv preprint arXiv:2306.01116},
  year={2023}
}
```