---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: id
    dtype: string
  - name: dump
    dtype: string
  - name: url
    dtype: string
  - name: date
    dtype: string
  - name: file_path
    dtype: string
  - name: language
    dtype: string
  - name: language_score
    dtype: float64
  - name: token_count
    dtype: int64
  - name: score
    dtype: float64
  - name: int_score
    dtype: int64
  - name: dataset
    dtype: string
  splits:
  - name: train
    num_examples: 160677091
license: odc-by
language:
- en
size_categories:
- 100M<n<1B
tags:
- pretraining
- smol-data
pretty_name: FineWeb 100BT
---
# FineWeb 100BT
A ~100 billion token English subset of [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb), created for efficient pretraining experiments.
Part of the [Smol-Data](https://huggingface.co/collections/HuggingFaceFW/smol-data) collection — tried and tested mixes for strong pretraining.
## Dataset Description
This dataset was created by randomly sampling from the full FineWeb dataset (~16.9T tokens) to produce a ~100B token subset. Sampling used a fixed seed (42) and a slight 1.05× oversampling factor, so that the realized token count lands at or just above the 100B target despite per-document variance.
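For intuition, these numbers imply a per-document keep probability of roughly 0.62%:

```python
source_tokens = 16.9e12  # approximate size of the full FineWeb dataset
target_tokens = 100e9    # desired subset size
oversample = 1.05        # slight oversampling to absorb sampling variance

keep_rate = target_tokens / source_tokens * oversample
print(f"{keep_rate:.5f}")  # ~0.00621, i.e. roughly 1 in 161 documents
```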
A pre-shuffled version is available at [HuggingFaceFW/fineweb_100BT-shuffled](https://huggingface.co/datasets/HuggingFaceFW/fineweb_100BT-shuffled).
## How It Was Created
The dataset was generated using [datatrove](https://github.com/huggingface/datatrove) with the [smol_data.py](https://github.com/huggingface/datatrove/blob/main/examples/smol_data.py) script. The pipeline reads from the source dataset in streaming mode, applies a `SamplerFilter` to downsample, and writes the result back to the Hugging Face Hub.
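A minimal sketch of such a pipeline is below. This is not the exact `smol_data.py` script; the reader/writer arguments are assumptions based on the datatrove docs, and a local `JsonlWriter` stands in for the Hub upload:

```python
from datatrove.executor import LocalPipelineExecutor
from datatrove.pipeline.filters import SamplerFilter
from datatrove.pipeline.readers import HuggingFaceDatasetReader
from datatrove.pipeline.writers import JsonlWriter

executor = LocalPipelineExecutor(
    pipeline=[
        # stream the source dataset rather than downloading ~16.9T tokens
        HuggingFaceDatasetReader(
            "HuggingFaceFW/fineweb",
            dataset_options={"split": "train"},
            streaming=True,
        ),
        # keep each document independently with the ~0.62% rate computed above
        SamplerFilter(rate=0.00621, seed=42),
        # local stand-in; the actual script writes back to the Hugging Face Hub
        JsonlWriter("output/fineweb_100BT"),
    ],
    tasks=1,
)
executor.run()
```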
## Usage
```python
from datasets import load_dataset
ds = load_dataset("HuggingFaceFW/fineweb_100BT", split="train", streaming=True)
for sample in ds:
    print(sample["text"][:200])
    break
```
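The schema also carries FineWeb's quality signals (`language_score`, `score`, `int_score`), which you can filter on while streaming. A small sketch; the threshold is chosen purely for illustration:

```python
from datasets import load_dataset

ds = load_dataset("HuggingFaceFW/fineweb_100BT", split="train", streaming=True)

# keep only documents the quality classifier rated 3 or above (illustrative threshold)
high_quality = ds.filter(lambda x: x["int_score"] >= 3)

for sample in high_quality.take(3):
    print(sample["url"], sample["token_count"])
```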
## Citation
```bibtex
@misc{niklaus2026smoldata,
  title={SmolData},
  author={Joel Niklaus and Hynek Kydl{\'\i}{\v{c}}ek},
  year={2026},
  publisher={Hugging Face},
  journal={Hugging Face repository},
  howpublished={\url{https://huggingface.co/collections/HuggingFaceFW/smol-data}}
}
```