---
language:
- en
license: apache-2.0
task_categories:
- text-generation
pretty_name: StenCore — FinePDFs-Edu Curated
authors:
- StentorLabs
size_categories:
- 100K<n<1M
source_datasets:
- HuggingFaceFW/finepdfs-edu
tags:
- education
- quality-filtered
- deduplicated
- perplexity-filtered
- kenlm
- minhash
- domain-reweighted
- llm-pretraining
- english
- pii-filtered
configs:
- config_name: default
  data_files:
  - split: train
    path: data/*.parquet
---
# StenCore — FinePDFs-Edu Curated

By StentorLabs
StenCore is StentorLabs' first dataset release — a quality-first resource built from the ground up for training language models. Derived from HuggingFaceFW/finepdfs-edu, every document passed a 14-stage automated curation pipeline including heuristic filtering, language ID, PII redaction, toxicity screening, eval decontamination, multi-strategy deduplication, KenLM + neural perplexity scoring, and domain reweighting.
⚠️ Privacy & Copyright Notice: This dataset was produced by automated pipelines that include regex-based PII redaction and heuristic filters. Automated redaction may miss personal data (false negatives) and may over-redact (false positives). No claim of complete PII removal is made. Do not use this dataset in systems that surface individual personal data. Review and clear upstream site-level licenses before redistributing or deploying models trained on this data. For takedown or privacy requests, open an issue or contact stentorlabs@gmail.com with the affected doc identifier and evidence.
## Quick Stats
| Property | Value |
|---|---|
| Pipeline | StenCore v2026.03 |
| Source | HuggingFaceFW/finepdfs-edu (3 Parquet files, ~8.9 GB) |
| Docs in → out | 584,000 → 149,000 (~25.5% final keep; 37.78% after the candidate pass) |
| Size (uncompressed) | 447,088,827 bytes (~447.09 MB) |
| Avg doc size | ~3,000 bytes |
| Synthetic docs | 0 (100% human-authored) |
| Language | English (en) |
| Platform | Kaggle CPU |
| Total runtime | ~9.47 hours wall / ~12+ hours CPU |
## Usage

```python
from datasets import load_dataset

ds = load_dataset("StentorLabs/stencore")

# Filter to the highest-quality documents
top = ds["train"].filter(lambda r: r["cqf_score"] >= 0.5)
print(len(top))
print(top[0]["text"][:500])
```
## Intended & Disallowed Uses
### ✅ Intended uses
- LLM pre-training and fine-tuning on English educational text
- Quality filtering and data curation research
- Domain reweighting and curriculum learning experiments
### 🚫 Disallowed uses
- Systems that surface or process personal information
- Redistribution without reviewing upstream licensing (Apache 2.0 requires attribution; check site-level licenses for individual documents)
- Any production deployment without independent legal and privacy review
## Pipeline Summary
StenCore (curate_2026 mode) is a fully automated curation pipeline (v2026.03) running 14 sequential stages from raw Parquet to a publish-ready HuggingFace dataset.
Every document in this dataset passed all of the following:
- Columnar pre-filter — DuckDB SQL-level filter over raw Parquet
- Adaptive quality thresholds — learned from the data, not hand-tuned (per-register: formal / conversational / technical)
- Language ID gate — fastText or stopword/script heuristic (`en` only)
- Heuristic quality filter — word count, alpha ratio, stopword hits, punctuation ratio, repetition ratios, avg word length
- PII redaction — regex detection of emails, phones, SSNs, IPs, card numbers, IBANs, API keys
- Toxicity screening — lexical axis scoring, drop policy
- Eval decontamination — exact, 8-gram overlap, char 3-gram, SimHash checks against benchmark sets
- Exact deduplication — SHA-1 fingerprint (in-memory)
- MinHash near-dedup — token shingle LSH (autotuned to 25% sample rate)
- CQF quality gate — top 85% by quality score (threshold: CQF ≥ 0.4167)
- KenLM + neural perplexity gate — `edugp/kenlm` Wikipedia model + SmolLM2-135M (excess mode, ref mean: 916.24)
- Cluster rehydration — restores high-quality representatives from dedup-dropped clusters
- Domain reweighting — per-domain weights learned from proxy model evaluation
- Mix optimization + final write — domain-quota sampling, curriculum ordering, shard + merkle output
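As a concrete illustration, the PII-redaction stage above can be sketched with regex substitution. The patterns below are simplified stand-ins for a few of the listed categories; the actual StenCore patterns are not published.

```python
import re

# Hypothetical patterns -- the real pipeline covers emails, phones, SSNs,
# IPs, card numbers, IBANs, and API keys with production-grade regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "IPV4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def redact(text: str) -> str:
    """Replace each PII match with a typed placeholder token."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@example.com from 10.0.0.1"))
# -> Contact [EMAIL] from [IPV4]
```

Typed placeholders (rather than deletion) preserve sentence structure for downstream language modeling.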
## 📋 Full Stage-by-Stage Breakdown
### Pre-Stage: Environment & Model Setup
#### KenLM Scorer
- Repo: `edugp/kenlm` | Corpus: `wikipedia/en`
- Snapshot: `3fbe35c83b1a39f420a345b7c96a186c8030d834`
- Mode: `first_pass` — KenLM prefilters, SmolLM2 rescores the subset
#### Neural Reference LM
- Model: `HuggingFaceTB/SmolLM2-135M` (torchao `int8_weight_only`)
- torch.compile: disabled (CPU stability)
Effective runtime profile:

```
light_mode=False input_cap=400 bootstrap_docs=400
stage_timeout_s=90 stream_chunk=32768 ppl_batch=192
ppl_workers=1 prefix_tokens=256 kenlm_mode=first_pass
```

12h target profile:

```
hf_input_target_gb=7.50 hf_input_max_files=64
cqf_keep_web=0.85 prior_keep=0.90 web_ratio=0.95
strict_ratio=False dedup=memory autotune_grace=10000
```
### Stage 1: resolve_web_inputs
Wall: 53.47s | CPU: 143.36s | Ratio: 2.68×
DuckDB columnar prefilter over 3 Parquet files → stage_columnar_prefilter.parquet. High CPU/wall ratio confirms parallel DuckDB execution.
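A columnar prefilter of this kind can be expressed as a single DuckDB query. The sketch below is illustrative only: the column names (`language`, `text`) and the length bounds are assumptions, not the actual stage SQL.

```sql
-- Sketch: push cheap filters into DuckDB's parallel Parquet scanner
-- so full text is only materialized for surviving rows.
COPY (
  SELECT *
  FROM read_parquet('raw/*.parquet')
  WHERE language = 'en'
    AND length(text) BETWEEN 200 AND 200000
) TO 'stage_columnar_prefilter.parquet' (FORMAT PARQUET);
```

Running the filter at the SQL level, before any Python-side processing, is what makes a CPU/wall ratio well above 1× possible on this stage.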
### Stage 2: adapt_multilingual_profiles
Wall: 25.48s | CPU: 25.39s | Ratio: 1.00×
Draws a pilot sample and learns per-register quality threshold parameters from the data. Three registers profiled: en:formal (1,854 samples), en:conversational (2,993 samples), en:technical. Single-threaded statistical pass.
### Stage 3: optimize_threshold_profiles
Wall: 86.79s | CPU: 138.40s | Ratio: 1.59×
Grid search over code/math profile scales, ranked by proxy quality metric. Selects best-performing threshold configuration.
### Stage 4: cqf_seed_verification_loop
Wall: 0.001s | CPU: 0.001s
Retrains CQF fastText at multiple thresholds, picks best by proxy quality. Completed near-instantly — seed was valid, no remediation needed.
### Stage 5: stage_web_candidate_pass
Wall: 27,162.57s (7.5h) | CPU: 42,229.32s | Ratio: 1.55×
Main filtering stage. Sequential gates per document:
- Quick prefilter (min words / alpha)
- HF-specific input filters (English subset, EDU v2, strip code fences)
- Source policy (host allow/deny, ToS risk, license allowlist)
- Line cleaner (boilerplate, nav, HTML artifacts, duplicate lines)
- Domain routing (code / math / prose)
- Language ID gate (+ English noise guard, code bypass)
- Domain-aware heuristic filter (adaptive thresholds from Stage 2/3)
- PII redaction
- Toxicity screening
- Eval decontamination (exact, 8-gram, char 3-gram, SimHash)
- CQF scoring
- Exact dedup (SHA-1, in-memory)
- MinHash near-dedup (autotuned)
- Semantic dedup (autotuned to `none`)
Kept: 220,635 / 584,000 (37.78%). Dropped records to semantic dup pool for potential rehydration.
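For illustration, the 8-gram decontamination gate listed above can be sketched as follows. The benchmark corpus and the 5% overlap threshold are assumptions made for the sketch, not the pipeline's actual settings.

```python
def ngrams(text: str, n: int = 8) -> set:
    """All word n-grams of a text, lowercased."""
    toks = text.lower().split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def contaminated(doc: str, benchmark_texts: list, threshold: float = 0.05) -> bool:
    """Flag a doc if too large a fraction of its 8-grams appear in any benchmark text."""
    doc_grams = ngrams(doc)
    if not doc_grams:
        return False  # too short to contain an 8-gram
    bench_grams = set().union(*(ngrams(t) for t in benchmark_texts))
    overlap = len(doc_grams & bench_grams) / len(doc_grams)
    return overlap >= threshold
```

The exact-match, char 3-gram, and SimHash checks follow the same pattern with different unit definitions.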
### Stage 6 (observed): mid_proxy_stage3
Wall: 115.50s | CPU: 160.80s | Ratio: 1.39×
Intermediate proxy scoring pass over quality-filtered candidates.
### Stage 7: stage_web_quality_and_perplexity
Wall: 300.05s | CPU: 282.49s | Ratio: 0.94×
- CQF threshold gate — keeps top 85% (CQF ≥ 0.4167)
- Multi-objective property minima — per-property floor checks
- Prior noise gate — filters statistical outliers vs. quality prior
- Hybrid disagreement trigger — routes CQF/secondary disagreements to full neural perplexity
- KenLM + SmolLM2 perplexity gate — excess mode, ref mean 916.24
22,450 documents scored. Avg perplexity: 902.02 (stored as excess perplexity — relative to the 916.24 calibrated mean, not absolute).
### Stage 8: rehydrate_clusters
Wall: 40.42s | CPU: 40.19s | Ratio: 0.99×
Reads the semantic dup pool. Re-adds top-quality cluster representatives (ranked by FineWeb2-like weighted formula). Optional MMR diversity selection. Single-threaded.
### Stage 9 (observed): mid_proxy_stage5
Wall: 102.92s | CPU: 143.10s | Ratio: 1.39×
Second intermediate proxy scoring pass after cluster rehydration.
### Proxy Eval & Domain Reweighting (~60 min observed gap)
Per-domain proxy scores aggregated → per-domain reweighting coefficients learned. Hundreds of source domains reweighted. Examples:
| Domain | Weight |
|---|---|
| `apps.oregonlegislature.gov` | 1.0811 |
| `fcaresources.com` | 1.0159 |
| `ymcawnc.org` | 0.9999 |
| `www2.cs.arizona.edu` | 0.9585 |
| `echalk-slate-prod.s3.amazonaws.com` | 0.8882 |
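Weights like these typically drive acceptance sampling downstream. A minimal sketch, assuming acceptance probability proportional to the domain weight (the actual StenCore normalization scheme is not published):

```python
import random

def accept(weight: float, max_weight: float, rng: random.Random) -> bool:
    """Accept a document with probability proportional to its domain weight."""
    return rng.random() < min(1.0, weight / max_weight)

# Illustration: over many draws, the highest-weight domain keeps everything,
# while lower-weight domains are thinned proportionally.
rng = random.Random(0)
weights = {
    "apps.oregonlegislature.gov": 1.0811,
    "echalk-slate-prod.s3.amazonaws.com": 0.8882,
}
w_max = max(weights.values())
kept = {d: sum(accept(w, w_max, rng) for _ in range(10_000))
        for d, w in weights.items()}
```

Scaling by the maximum weight keeps every probability in [0, 1] without discarding any document from the top domain.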
### Stage 9: build_synthetic_pool
Wall: 0.003s — No teacher LLM configured. Pool: 0 documents.
### Stage 10: stage_synthetic_filter
Wall: 0.002s — No-op (empty pool).
### Stage 11: mix_optimization
Wall: 175.34s | CPU: 214.87s | Ratio: 1.23×
Proxy-evaluated search over web/synth mixing ratios. Applies domain weights from proxy eval. Produces final document selection plan.
### Stage 12: final_mix_and_write
Wall: 2,311s (38.5 min)
Domain quota enforcement → domain-weighted acceptance sampling → final exact dedup → optional curriculum ordering (easy→hard by CQF/perplexity) → streaming JSONL + Parquet write → SHA-256 + merkle root → optional deterministic sharding.
| Docs Written | Bytes | Avg Bytes/Doc |
|---|---|---|
| 1,000 | 2,783,132 | 2,783 |
| 149,000 | 447,088,827 | 2,999 |
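The merkle output mentioned above can be sketched as pairwise SHA-256 hashing over per-document digests. The leaf encoding and odd-node handling below are one common construction, not necessarily StenCore's exact scheme:

```python
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(docs: list) -> str:
    """Hash each doc to a leaf, then pairwise-hash levels up to a single root."""
    level = [sha256(d) for d in docs]
    if not level:
        return sha256(b"").hex()
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            nxt.append(sha256(level[i] + level[i + 1]))
        if len(level) % 2:  # odd node carried up unchanged
            nxt.append(level[-1])
        level = nxt
    return level[0].hex()
```

A single root lets consumers verify an entire shard set, while per-document SHA-256 digests localize any corruption.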
### Stages 13–14: proxy_eval + hf_push
Final proxy metric gate enforcement → auto-upload to HuggingFace Hub.
## Adaptive Quality Thresholds
### 📐 Full learned threshold values (en:formal and en:conversational)
These parameters were learned from the data in Stage 2 and optimized in Stage 3. All values are exact as logged.
#### en:formal — 1,854 bootstrap samples
| Parameter | Value |
|---|---|
| `min_words` | 30 |
| `max_words` | 2,000 |
| `min_stopwords` | 2 |
| `max_line_punct_ratio` | 0.1111111111111111 |
| `max_word_repeat_3gram_ratio` | 0.30466436237947997 |
| `max_char_repeat_5gram_ratio` | 0.35 |
| `min_alpha_ratio` | 0.5967419247419248 |
| `min_avg_word_len` | 4.157040378006873 |
| `max_avg_word_len` | 6.216977322149734 |
#### en:conversational — 2,993 bootstrap samples
| Parameter | Value |
|---|---|
| `min_words` | 18 |
| `max_words` | 2,000 |
| `min_stopwords` | 2 |
| `max_line_punct_ratio` | 0.10838961038961101 |
| `max_word_repeat_3gram_ratio` | 0.19476069102237326 |
| `max_char_repeat_5gram_ratio` | 0.35 |
| `min_alpha_ratio` | 0.6542321503584156 |
| `min_avg_word_len` | 4.098954647914038 |
| `max_avg_word_len` | 6.0 |
#### en:technical
Separate profile applied; full values truncated in logs. Expected to have higher tolerance for non-alphabetic characters (equations, code, symbols) and relaxed stopword requirements.
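To make the thresholds concrete, here is a minimal sketch of how a document might be gated against the learned `en:formal` values. Only a subset of the logged checks is shown; the real filter also covers stopwords, punctuation, and repetition ratios.

```python
# Logged en:formal thresholds (subset)
EN_FORMAL = {
    "min_words": 30,
    "max_words": 2000,
    "min_alpha_ratio": 0.5967419247419248,
    "min_avg_word_len": 4.157040378006873,
    "max_avg_word_len": 6.216977322149734,
}

def passes(text: str, p: dict = EN_FORMAL) -> bool:
    """Apply word-count, alphabetic-ratio, and avg-word-length gates."""
    words = text.split()
    if not (p["min_words"] <= len(words) <= p["max_words"]):
        return False
    alpha = sum(c.isalpha() for c in text) / max(len(text), 1)
    if alpha < p["min_alpha_ratio"]:
        return False
    avg = sum(len(w) for w in words) / len(words)
    return p["min_avg_word_len"] <= avg <= p["max_avg_word_len"]
```

Because the bounds are learned per register rather than hand-tuned, conversational text faces a lower `min_words` floor (18 vs. 30) but a stricter repetition ceiling.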
## Deduplication Autotune
### ⚙️ Runtime autotune event log
The autotune system fires after 10,000 documents (grace period). All 6 events occurred in a 5,000-document window — the system converges aggressively.
| Event | Docs Seen | Throughput | Action |
|---|---|---|---|
| Start | 0 | — | sem=hybrid, embed=0.0200, minhash=1.000 |
| #1 | 10,000 | 21.81 docs/s | embed_sample → 0.0050 |
| #2 | 11,000 | 21.92 docs/s | embed_sample → 0.0020 |
| #3 | 12,000 | 22.04 docs/s | semantic_mode → minhash |
| #4 | 13,000 | 22.13 docs/s | minhash_sample → 0.500 |
| #5 | 14,000 | 22.31 docs/s | minhash_sample → 0.250 |
| #6 | 15,000 | 22.57 docs/s | semantic_mode → none |
| Stable | 584,000 | 21.5 docs/s | Final: sem=none, embed=0.0020, minhash=0.250 |
Throughput gain: ~1.8× (12.3 → 22 docs/s). MinHash at 25% sampling will miss some near-duplicate pairs — a deliberate throughput/precision tradeoff accepted by the autotune system based on observed duplicate density.
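The MinHash signatures behind the near-dedup stage can be sketched as below. The shingle size and signature length are assumptions; the 25% sampling refers to how many documents enter the LSH index, which is not shown here.

```python
import hashlib

NUM_HASHES = 64  # signature length (assumption)

def shingles(text: str, k: int = 5) -> set:
    """Overlapping k-word shingles, lowercased."""
    toks = text.lower().split()
    return {" ".join(toks[i:i + k]) for i in range(max(len(toks) - k + 1, 1))}

def minhash(text: str) -> list:
    """One minimum over seed-salted SHA-1 digests approximates each permutation."""
    sigs = []
    for seed in range(NUM_HASHES):
        sigs.append(min(
            int.from_bytes(hashlib.sha1(f"{seed}:{s}".encode()).digest()[:8], "big")
            for s in shingles(text)
        ))
    return sigs

def est_jaccard(a: str, b: str) -> float:
    """Fraction of matching signature positions estimates shingle-set Jaccard."""
    sa, sb = minhash(a), minhash(b)
    return sum(x == y for x, y in zip(sa, sb)) / NUM_HASHES
```

Banding these signatures into an LSH table is what turns pairwise comparison into near-linear candidate lookup.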
## Perplexity Scoring
### 📊 KenLM + SmolLM2 scoring details
#### Configuration
- KenLM model: `edugp/kenlm`, `wikipedia/en`, snapshot `3fbe35c83b1a39f420a345b7c96a186c8030d834`
- Neural LM: `HuggingFaceTB/SmolLM2-135M` (torchao `int8_weight_only`)
- Mode: `first_pass` — KenLM prefilters all docs; SmolLM2 rescores the near-boundary subset
- Scoring mode: `excess` — `score = raw_perplexity - reference_mean`
- Reference mean: 916.2404 (calibrated fresh on 256 bootstrap docs, `fit_new`)
#### Scoring progress
| Docs Scored | Last Doc PPL | Running Avg PPL |
|---|---|---|
| 50 | 7,442.60 | 218.38 |
| 22,450 | 8,182.66 | 902.02 |
High individual perplexity values (>7,000) are expected for math-heavy or notation-rich educational text under a Wikipedia-trained model. The excess mode partially normalizes this. Running mean converges to ~902 across the full scored set.
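The excess scoring rule itself is simple arithmetic. A sketch, assuming natural-log token log-probabilities from the reference model:

```python
import math

REF_MEAN = 916.2404  # calibrated reference mean from the run logs

def perplexity(token_logprobs: list) -> float:
    """exp of the mean negative log-likelihood over the scored tokens."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def excess_score(token_logprobs: list) -> float:
    """score = raw_perplexity - reference_mean (the logged 'excess' mode)."""
    return perplexity(token_logprobs) - REF_MEAN
```

Subtracting the calibrated mean centers the distribution, so a document at exactly the reference perplexity scores 0 rather than 916.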
## Runtime & Timing
### ⏱️ Full per-stage timing table
| Stage | Wall Time | CPU Time | CPU/Wall |
|---|---|---|---|
| Model/env setup | ~167s | — | — |
| `resolve_web_inputs` | 53.47s | 143.36s | 2.68× |
| `adapt_multilingual_profiles` | 25.48s | 25.39s | 1.00× |
| `optimize_threshold_profiles` | 86.79s | 138.40s | 1.59× |
| `cqf_seed_verification_loop` | 0.001s | 0.001s | 1.00× |
| `stage_web_candidate_pass` | 27,162.57s | 42,229.32s | 1.55× |
| `stage_web_quality_and_perplexity` | 300.05s | 282.49s | 0.94× |
| `mid_proxy_stage3` (observed) | 115.50s | 160.80s | 1.39× |
| `rehydrate_clusters` | 40.42s | 40.19s | 0.99× |
| `mid_proxy_stage5` (observed) | 102.92s | 143.10s | 1.39× |
| Proxy / domain reweighting | ~3,590s | — | — |
| `build_synthetic_pool` | 0.003s | 0.005s | — |
| `stage_synthetic_filter` | 0.002s | 0.002s | — |
| `mix_optimization` | 175.34s | 214.87s | 1.23× |
| `final_mix_and_write` | ~2,311s | — | — |
| **TOTAL** | ~34,088s | ~43,377s+ | ~1.27× |
Candidate pass throughput: 12.3 docs/s (initial) → ~22 docs/s (post-autotune, **1.8× gain**).
## Limitations
- PDF extraction artifacts — OCR artifacts, broken equations, and malformed tables may be present despite filtering.
- Residual PII — Automated regex redaction does not guarantee complete PII removal. Do not use for systems that surface personal information.
- Copyright — Source PDFs may carry individual site-level licenses. Apache 2.0 requires attribution; verify upstream licensing for your use case.
- KenLM Wikipedia bias — Math-heavy or highly technical documents may be underrepresented due to high perplexity under a Wikipedia-trained model.
- ~62% rejection rate — Some valid educational content may have been dropped due to heuristic threshold mismatch (e.g., table-heavy or equation-dense formatting).
- English only — Pipeline profiled `en:formal`, `en:conversational`, and `en:technical` registers only.
- No synthetic data — This run did not use the synthetic generation system. Dataset is 100% source text.
- MinHash 25% sampling — Post-autotune dedup will miss some near-duplicate pairs.
## Licensing & Citation
Released under Apache 2.0. Attribution required. Derived from HuggingFaceFW/finepdfs-edu — review upstream licensing before use.
```bibtex
@dataset{stencore_finepdfs_edu_curated,
  title = {StenCore: FinePDFs-Edu Curated},
  author = {StentorLabs},
  year = {2026},
  note = {StentorLabs' first dataset. StenCore pipeline v2026.03.
          584k docs in, 149k out. Adaptive heuristics, PII redaction,
          toxicity/decontam gates, MinHash + KenLM + neural perplexity,
          CQF scoring, proxy domain reweighting.},
  howpublished = {\url{https://huggingface.co/datasets/StentorLabs/stencore}}
}

@dataset{fineweb_finepdfs_edu,
  author = {HuggingFace FineWeb Team},
  title = {FinePDFs-Edu},
  howpublished = {\url{https://huggingface.co/datasets/HuggingFaceFW/finepdfs-edu}}
}
```
Contact: StentorLabs@gmail.com — for takedown requests, privacy concerns, or feedback.
Made with ❤️ by StentorLabs