# FineWeb-Edu-Dedup (Globally Shuffled)
A uniformly shuffled version of the FineWeb-Edu-Dedup subset from SmolLM-Corpus by HuggingFace.
## Source Data
This dataset is derived from HuggingFaceTB/smollm-corpus, specifically the fineweb-edu-dedup subset. That subset is itself derived from FineWeb-Edu, a filtered and deduplicated extract of Common Crawl selected for educational content quality.
| Property | Value |
|---|---|
| Source dataset | HuggingFaceTB/smollm-corpus |
| Source subset | fineweb-edu-dedup |
| Upstream origin | FineWeb-Edu via Common Crawl |
| Source files | 234 parquet files |
| Total rows | 190,168,005 |
| Output files | 381 parquet files (~499K rows each) |
| Shuffle seed | 42 |
| Compression | zstd |
The text content is byte-identical to the source. No filtering, deduplication, or transformation has been applied beyond reordering rows. Each output row includes a _source_index column recording the row's original position in the source dataset for full traceability.
## Motivation
The upstream FineWeb-Edu-Dedup parquet files are organized by Common Crawl dump, producing temporal and topical clustering: consecutive rows tend to come from the same crawl, the same domains, and similar subject matter. When pretokenized training shards are built by reading these files sequentially, this clustering propagates into the training data, reducing gradient diversity during pretraining.
This dataset eliminates that ordering bias by applying a provably uniform global shuffle to all 190 million rows.
## Schema
| Column | Type | Description |
|---|---|---|
| `text` | large_string | Document text, byte-identical to the source |
| `_source_index` | int64 | Original row index in the source dataset (0-indexed across all 234 source files concatenated in sorted filename order) |
## Methodology

### Uniform Permutation
A single permutation of all N = 190,168,005 row indices is generated using the Fisher-Yates shuffle (also known as the Knuth shuffle). Fisher-Yates is the standard algorithm for generating uniformly random permutations: it produces each of the N! possible orderings with exactly equal probability 1/N!.
The permutation assigns every source row a unique output position. From this, each row's destination output file and position within that file are derived deterministically.
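At toy scale, this derivation can be sketched with NumPy (whose `permutation` is a Fisher-Yates variant); the constants here are illustrative stand-ins, not the production values:

```python
import numpy as np

# Toy-scale sketch: one global permutation, then bucket/offset by division.
N = 1_000                 # stands in for 190,168,005
ROWS_PER_BUCKET = 100     # stands in for ~499K

rng = np.random.default_rng(42)   # PCG64-backed generator
perm = rng.permutation(N)         # perm[i] = output position of source row i

bucket = perm // ROWS_PER_BUCKET  # destination output file for each source row
offset = perm % ROWS_PER_BUCKET   # position within that file

# Sanity: perm is a valid permutation, so every (bucket, offset) pair is unique.
assert np.array_equal(np.sort(perm), np.arange(N))
```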
### Pseudorandom Number Generator
The permutation is generated using NumPy's PCG64 (Permuted Congruential Generator) with a 128-bit state and period of 2^128. To prevent correlation between runs with sequential seeds, the integer seed is hashed through BLAKE2b before being used to initialize the generator. The output is fully deterministic: the same seed always produces the same permutation.
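A sketch of this seeding scheme (the helper name `rng_from_seed` and the exact byte handling are illustrative; the upstream code may differ in detail):

```python
import hashlib
import numpy as np

def rng_from_seed(seed: int) -> np.random.Generator:
    # Hash the integer seed through BLAKE2b so that sequential seeds
    # (42, 43, ...) initialize PCG64 with unrelated 128-bit states.
    digest = hashlib.blake2b(seed.to_bytes(8, "little"), digest_size=16).digest()
    return np.random.Generator(np.random.PCG64(int.from_bytes(digest, "little")))

# Deterministic: the same seed always yields the same permutation.
a = rng_from_seed(42).permutation(10)
b = rng_from_seed(42).permutation(10)
assert np.array_equal(a, b)
```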
### Two-Pass Shuffle
The shuffle is executed in two sequential-I/O passes to avoid random access across the full dataset:
Pass 1 (Scatter): The 234 source parquet files are read sequentially. For each row, the precomputed permutation determines which output bucket it belongs to. Rows are buffered by bucket and flushed to intermediate shard files on disk when buffers fill. All I/O is sequential. Multiple workers process source files in parallel, each writing to its own shard files.
Pass 2 (Gather): For each of the 381 output buckets, all shard files are read, concatenated, sorted into the permutation-defined order, and written as the final output parquet. Each bucket is independent, making this embarrassingly parallel.
This approach requires no random access across the full dataset and uses bounded memory per worker regardless of dataset size.
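The two passes can be sketched in memory at toy scale (per-bucket buffers and shard files reduced to Python lists; the real pipeline writes parquet shards to disk):

```python
import numpy as np

N, ROWS_PER_BUCKET = 12, 4
rows = [f"row-{i}" for i in range(N)]              # stand-in for source rows
perm = np.random.default_rng(42).permutation(N)    # output position per row

# Pass 1 (scatter): stream rows in source order into per-bucket buffers.
buckets: dict[int, list] = {}
for i, row in enumerate(rows):
    dest, offset = divmod(int(perm[i]), ROWS_PER_BUCKET)
    buckets.setdefault(dest, []).append((offset, row))

# Pass 2 (gather): sort each bucket into its permutation-defined order.
output = []
for b in sorted(buckets):
    output.extend(row for _, row in sorted(buckets[b]))

# The result is exactly the source rows reordered by the permutation.
inverse = np.argsort(perm)            # inverse[k] = source row at output position k
assert output == [rows[i] for i in inverse]
```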
### Statistical Verification
The sampling logic is isolated in a pure module with no I/O or side effects, and validated with statistical tests:
- Positional uniformity: Chi-squared tests confirm each element is equally likely at each output position (n=12, 600K trials, alpha=0.001).
- Adjacency uniformity: Chi-squared tests confirm each element is equally likely to follow any other element (n=12, 600K trials, alpha=0.001).
- Full permutation uniformity: For n=6, all 720 possible permutations appear with equal frequency over 3M trials (chi-squared, alpha=0.001).
- Seed independence: Spearman rank correlations between permutations from 10K consecutive seed pairs are verified to be near zero.
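As an illustration (not the project's actual test suite), here is a scaled-down positional-uniformity check with n=4 and 20K trials, computing the chi-squared statistic by hand:

```python
import numpy as np

n, trials = 4, 20_000
rng = np.random.default_rng(0)

counts = np.zeros((n, n), dtype=int)   # counts[e, p]: element e seen at position p
for _ in range(trials):
    for pos, elem in enumerate(rng.permutation(n)):
        counts[elem, pos] += 1

expected = trials / n                  # uniform hypothesis: each cell ~ trials/n
chi2 = ((counts - expected) ** 2 / expected).sum(axis=1)

# Critical chi-squared value for df = n-1 = 3 at alpha = 0.001 is ~16.27;
# a uniform shuffler should stay below it for every element.
assert (chi2 < 16.27).all()
```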
### Bucket Sizing
The 190M rows are distributed across 381 output files (~499K rows each). Bucket size controls the statistical representativeness of each output file.
For a bucket of m rows, a category with global frequency p has an expected relative sampling error of approximately 1/sqrt(m*p) in its within-bucket count. At ~499K rows per bucket:
| Category frequency p | Expected count per bucket | Relative error |
|---|---|---|
| 10% | ~49,900 | 0.45% |
| 1% | ~4,990 | 1.4% |
| 0.1% | ~499 | 4.5% |
| 0.04% | ~200 | 7.1% |
Categories as rare as 0.04% of the dataset have under ~10% relative error in any single bucket. This means each output file is approximately representative of the global distribution; the file-level ordering is approximately exchangeable.
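The relative-error arithmetic can be checked numerically from the 1/sqrt(m*p) relation (the bucket size m here is the approximate average):

```python
import math

m = 190_168_005 / 381                  # ~499K rows per bucket

def relative_error(p: float) -> float:
    # Relative sampling error for a category of global frequency p
    # in a bucket of m rows: 1 / sqrt(m * p).
    return 1.0 / math.sqrt(m * p)

for p in (0.10, 0.01, 0.001, 0.0004):
    print(f"p={p:.2%}: ~{m * p:,.0f} expected rows, rel. error {relative_error(p):.2%}")
```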
## Output Verification
After the shuffle completes, automated checks confirm:
- Row count: Total rows across all 381 output files equals 190,168,005.
- Permutation validity: All `_source_index` values form a valid permutation of [0, 190168005) with no duplicates or gaps.
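At toy scale, the permutation-validity check amounts to the following sketch (using a bincount rather than a full sort):

```python
import numpy as np

N = 1_000
# Stand-in for the concatenated _source_index column of all output files.
source_index = np.random.default_rng(7).permutation(N)

counts = np.bincount(source_index, minlength=N)
assert len(source_index) == N          # total row count matches
assert (counts == 1).all()             # each index in [0, N) appears exactly once
```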
## Known Limitations
**HuggingFace Data Studio:** The parquet files were written without a page index, which prevents the HuggingFace Data Studio from serving random row previews without loading entire row groups. This does not affect programmatic consumption (PyArrow, pandas, DuckDB, etc.), only the web-based Data Studio preview. A future re-serialization with `write_page_index=True` and smaller row-group sizes would resolve this.
## License
This dataset inherits the ODC-BY 1.0 license from FineWeb via SmolLM-Corpus.