---
dataset_info:
  features:
    - name: text
      dtype: string
    - name: id
      dtype: string
    - name: url
      dtype: string
    - name: language
      dtype: string
    - name: language_score
      dtype: float64
    - name: fasttext_score
      dtype: float64
    - name: dataset
      dtype: string
  splits:
    - name: train
      num_examples: 89269902
license: odc-by
language:
  - en
size_categories:
  - 10M<n<100M
tags:
  - pretraining
  - smol-data
pretty_name: DCLM 100BT (Shuffled)
---

# DCLM 100BT (Shuffled)

A globally shuffled version of `HuggingFaceFW/dclm_100BT`.

Part of the Smol-Data collection of tried-and-tested mixes for strong pretraining.

## Dataset Description

This dataset contains the same ~100B tokens as `dclm_100BT`, but with all documents globally shuffled (`seed=42`). Use this version when you need randomized document ordering for pretraining.

## How It Was Created

The unshuffled dataset was loaded into memory, shuffled with `dataset.shuffle(seed=42)`, and re-uploaded in 100 shards. See the `smol_data.py` script for details.

## Usage

```python
from datasets import load_dataset

# Stream the dataset without downloading it in full
ds = load_dataset("HuggingFaceFW/dclm_100BT-shuffled", split="train", streaming=True)
for sample in ds:
    print(sample["text"][:200])
    break
```

## Citation

```bibtex
@misc{niklaus2026smoldata,
      title={SmolData},
      author={Joel Niklaus and Hynek Kydl{\'\i}{\v{c}}ek},
      year={2026},
      publisher={Hugging Face},
      journal={Hugging Face repository},
      howpublished={\url{https://huggingface.co/collections/HuggingFaceFW/smol-data}}
}
```