FineWeb-Edu-Dedup Shuffled Pretokenized

Pretokenized training shards built from a globally shuffled version of FineWeb-Edu-Dedup, ready for direct consumption by the Daisy pretraining loop.

Summary

| Property | Value |
|---|---|
| Total tokens | 181,465,257,766 (~181.5B) |
| Train tokens | ~180.5B |
| Val tokens | 1,000,000,000 (1B) |
| Train shards | 1,994 |
| Val shards | 10 |
| Tokens per shard | 100,000,000 (full shards); last shard per worker may be partial |
| Documents (train) | 180,185,493 |
| Documents (val) | 992,927 |
| Tokenizer | jonathanmiddleton/daisy (49,152 vocab, BPE) |
| Token dtype | uint16 |
| Shard format | v3 (magic=20260114, version=3) |
| EOS token ID | 49131 |

Directory Structure

train/
  000000.bin
  000001.bin
  ...
  001993.bin
val/
  000000.bin
  000001.bin
  ...
  000009.bin

Each .bin file contains a 1024-byte header followed by a flat array of uint16 token IDs.

Shard Format

Each shard file has a fixed 1024-byte header (256 int32 words) followed by the token payload:

| Header word | Field | Value |
|---|---|---|
| 0 | magic | 20260114 |
| 1 | version | 3 |
| 2 | num_tokens | number of tokens in this shard |
| 3 | tokenizer_crc | CRC32 of tokenizer name (stored as uint32 in int32 slot) |
| 4 | vocab_size | 49152 |
| 5 | eos_id | 49131 |
| 6 | dtype_bits | 16 |
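The header layout above can be parsed with a short NumPy routine. This is a minimal reader sketch based only on the format description in this card; the function name and return shape are illustrative, not the Daisy loader's actual API.

```python
import numpy as np

HEADER_BYTES = 1024  # 256 int32 words
MAGIC = 20260114
VERSION = 3

def read_shard(path):
    """Parse a v3 shard: 1024-byte int32 header, then uint16 token payload."""
    with open(path, "rb") as f:
        header = np.frombuffer(f.read(HEADER_BYTES), dtype=np.int32)
        assert header[0] == MAGIC and header[1] == VERSION, "not a v3 shard"
        num_tokens = int(header[2])
        tokens = np.frombuffer(f.read(num_tokens * 2), dtype=np.uint16)
    meta = {
        "num_tokens": num_tokens,
        # word 3 holds a uint32 CRC in an int32 slot; reinterpret it
        "tokenizer_crc": int(np.uint32(header[3])),
        "vocab_size": int(header[4]),
        "eos_id": int(header[5]),
        "dtype_bits": int(header[6]),
    }
    return meta, tokens
```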

The token stream is a concatenation of documents separated by EOS tokens:

[EOS] [doc1_token1] [doc1_token2] ... [EOS] [doc2_token1] ...

Every document begins with an EOS token (ID 49131). Documents may span shard boundaries: a document that doesn't fit entirely in one shard continues at the start of the next shard within the same worker's output. The training data loader treats all shards as a single continuous token stream.
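Given that convention, documents can be recovered by splitting the stream at EOS positions. A sketch, with the caveat noted under Verification: tokenized content can incidentally contain the EOS id, so this split is approximate, and a real loader would stream shards rather than materialize every document.

```python
import numpy as np

EOS_ID = 49131  # from the shard header

def split_documents(tokens: np.ndarray):
    """Split a token stream into documents; each document starts at an EOS token."""
    starts = np.flatnonzero(tokens == EOS_ID)
    ends = np.append(starts[1:], len(tokens))
    return [tokens[s:e] for s, e in zip(starts, ends)]
```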

Provenance

This dataset was produced by a two-stage pipeline:

Stage 1: Global Shuffle (parquet)

The 190,168,005 rows of HuggingFaceTB/smollm-corpus (fineweb-edu-dedup subset) were globally shuffled using a Fisher-Yates permutation with BLAKE2b-seeded PCG64 PRNG (seed=42). The shuffled parquet is published separately at JonathanMiddleton/fineweb-edu-dedup-shuffled.

The shuffle eliminates temporal and topical clustering from the upstream Common Crawl dump ordering, improving gradient diversity during pretraining.
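The Stage 1 permutation can be sketched as follows. This is an illustrative reconstruction only: the exact BLAKE2b key-derivation scheme used by the pipeline is an assumption, and the real shuffle operates over all 190,168,005 row indices.

```python
import hashlib

import numpy as np

def shuffled_indices(n: int, seed: int = 42) -> np.ndarray:
    """Fisher-Yates permutation of n row indices, driven by a PCG64 stream
    seeded from a BLAKE2b digest of the seed (key derivation is assumed)."""
    digest = hashlib.blake2b(str(seed).encode(), digest_size=8).digest()
    rng = np.random.Generator(np.random.PCG64(int.from_bytes(digest, "little")))
    idx = np.arange(n)
    for i in range(n - 1, 0, -1):  # classic Fisher-Yates sweep
        j = rng.integers(0, i + 1)
        idx[i], idx[j] = idx[j], idx[i]
    return idx
```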

Stage 2: Pretokenization (this dataset)

The shuffled parquet was tokenized using the jonathanmiddleton/daisy tokenizer (49,152 vocab BPE) and written as uint16 binary shards.

  • Train/val split: The first 20 of 381 shuffled parquet files (5%) were reserved for validation. The remaining 361 files were used for training.
  • Train shards: 190 parallel workers drained all 361 train parquet files, producing 1,994 shards (1,803 full shards of 100M tokens + 191 partial final shards).
  • Val shards: 1 worker tokenized the 20 val parquet files, capped at 10 shards (1B tokens). Not all val documents were tokenized due to the shard cap.

Verification

Post-build validation confirmed:

  • All shard headers are valid (magic, version, tokenizer CRC, payload size).
  • Sequential shard naming with no gaps.
  • Train EOS token count (180,185,493) matches the source row count for the 361 train parquet files (180,185,425 rows). The +68 difference is within tolerance (0.00004%), likely from documents whose tokenized content incidentally contains the EOS token ID.
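The EOS-count check can be reproduced with a short scan over the shard payloads. A minimal sketch assuming the header/payload layout documented above; as the note explains, the count is an upper bound on documents because content tokens can collide with the EOS id.

```python
import numpy as np

EOS_ID = 49131
HEADER_BYTES = 1024

def count_eos(shard_paths):
    """Sum EOS occurrences across shard payloads (skipping each 1024-byte header)."""
    total = 0
    for path in shard_paths:
        with open(path, "rb") as f:
            f.seek(HEADER_BYTES)
            tokens = np.frombuffer(f.read(), dtype=np.uint16)
        total += int(np.count_nonzero(tokens == EOS_ID))
    return total
```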

Usage

Download

python -m data.download_dataset fineweb-edu-shuffled

This downloads to data/fineweb-edu-shuffled/train/ and data/fineweb-edu-shuffled/val/.

Training Configuration

In a Daisy training YAML config:

train_shards:
  - type: "fineweb_edu_shuffled"
    path: "data/fineweb-edu-shuffled/train"
    sequence_length: 65536

val_shards:
  - type: "fineweb_edu_shuffled"
    path: "data/fineweb-edu-shuffled/val"
    target_tokens: 1_000_000
    sequence_length: 65536

The data loader globs *.bin from the directory and reads shards sequentially.

Shard Range Selection

To use a subset of shards (e.g., for multi-stage training that avoids data reuse):

path: "data/fineweb-edu-shuffled/train[000500:001000]"

This selects shards 000500.bin through 001000.bin (inclusive), using the range filter supported by the Daisy data loader.
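A range suffix like this could be parsed with a small helper. The function names below are hypothetical; the Daisy loader's actual parsing code may differ, but the inclusive-range semantics match the description above.

```python
import re

def parse_shard_range(path: str):
    """Split 'dir[000500:001000]' into (base_dir, 500, 1000); plain paths
    return (path, None, None). Hypothetical helper, not the Daisy API."""
    m = re.fullmatch(r"(.*)\[(\d+):(\d+)\]", path)
    if m is None:
        return path, None, None
    return m.group(1), int(m.group(2)), int(m.group(3))

def select_shards(names, start, end):
    """Keep shard filenames whose numeric stem falls in [start, end] inclusive."""
    if start is None:
        return sorted(names)
    return sorted(n for n in names if start <= int(n.split(".")[0]) <= end)
```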

License

This dataset inherits the ODC-BY 1.0 license from FineWeb via SmolLM-Corpus.
