---
license: odc-by
language:
  - pl
pretty_name: FinetextPL-Edu
size_categories:
  - 100M<n<1B
task_categories:
  - text-generation
tags:
  - text-quality
  - educational
  - polish-nlp
  - pretraining
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/*
---

# FinetextPL-Edu

**TL;DR** — ~160 million Polish documents from FineWeb2 and FinePDFs, each annotated with a `prediction` score (1–5) estimating educational value. Filter on `prediction >= 2.5` to retain a quality-focused subset while preserving a robust portion of training tokens. Created as part of an engineering thesis on educational corpus curation for Polish LLM pretraining.

Token volume estimates using the APT4 tokenizer:

- FineWeb2 slice: ~109.8B tokens
- FinePDFs slice: ~37.3B tokens

## Quick Start

```python
from datasets import load_dataset

ds = load_dataset("FinetextPL/FinetextPL-Edu", split="train", streaming=True)

# We recommend filtering by scores >= 2.5
edu = ds.filter(lambda x: x["prediction"] >= 2.5)

# You may also filter by source
web_only  = ds.filter(lambda x: x["dataset_source"] == "fineweb2")
pdfs_only = ds.filter(lambda x: x["dataset_source"] == "finepdfs")
```

## Dataset Description

FinetextPL-Edu is a large-scale Polish corpus derived from the Polish subsets of FineWeb2 and FinePDFs. The dataset contains approximately 160 million documents, each annotated with a scalar score representing its "educational value". This score was generated by a custom-trained RoBERTa classifier based on PKOBP/polish-roberta-8k, designed to identify content suitable for training high-quality language models.

The primary goal of this dataset is to provide a resource for training Polish language models with an emphasis on factual grounding and reasoning ability. It was created by applying a methodology inspired by the FineWeb-Edu project to the Polish language, addressing the need for systematically filtered, high-quality native corpora.

The core feature of this dataset is the `prediction` field — a float score reflecting the 1–5 educational annotation rubric. A threshold of `prediction >= 2.5` is the recommended starting point: it retains only the top ~10% of documents by count, but these documents are substantially longer than average and contribute a large share of training tokens.
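As a rough illustration of the count-vs-tokens tradeoff at a threshold, the sketch below computes document and token retention over a toy sample. The `prediction` field name matches the dataset, while the scores, lengths, and the `n_tokens` field are invented for this example:

```python
# Toy records mimicking the dataset schema: a quality score plus a token count.
# All values here are invented for illustration; only "prediction" is a real field.
sample = [
    {"prediction": 1.2, "n_tokens": 300},
    {"prediction": 2.1, "n_tokens": 450},
    {"prediction": 2.8, "n_tokens": 2400},
    {"prediction": 3.6, "n_tokens": 3100},
    {"prediction": 1.7, "n_tokens": 250},
]

THRESHOLD = 2.5
kept = [r for r in sample if r["prediction"] >= THRESHOLD]

doc_retention = len(kept) / len(sample)
token_retention = sum(r["n_tokens"] for r in kept) / sum(r["n_tokens"] for r in sample)

print(f"documents kept: {doc_retention:.0%}")   # a small fraction of documents...
print(f"tokens kept:    {token_retention:.0%}")  # ...carrying a larger token share
```

The same two ratios, computed over the real corpus, are what the ~10%-of-documents figure above refers to.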

## Data Fields

| Field | Type | Source | Description |
|---|---|---|---|
| `text` | string | Both | Main document content |
| `prediction` | float | Both | Educational quality score (~1–5) |
| `dataset_source` | string | Both | `"FineWeb2"` or `"FinePDFs"` |
| `id` | string | Both | Unique document identifier |
| `file_path` | string | Both | Path to the source WARC or PDF file |
| `minhash_cluster_size` | int64 | Both | Size of the document's MinHash deduplication cluster (useful for custom upsampling strategies) |
| `url` | string | FineWeb2 | Source URL |
| `date` | string | FineWeb2 | Crawl date from Common Crawl |
| `dump` | string | FineWeb2 | Common Crawl dump identifier |
| `offset` | int64 | FinePDFs | Byte offset within the source file |
| `full_doc_lid` | string | FinePDFs | Language ID of the full document |
| `full_doc_lid_score` | float | FinePDFs | Language ID confidence score |
| `is_truncated` | bool | FinePDFs | Whether the document was truncated |
| `duplicate_count` | int64 | FinePDFs | Number of near-duplicate copies found |

## Source Data

  1. FineWeb2 (Polish slice): ~150 million documents from the Polish portion of FineWeb2, a filtered version of Common Crawl.
  2. FinePDFs (Polish slice): ~10 million documents from the Polish portion of FinePDFs, contributing formal and structured text from academic, technical, and institutional sources.

## Annotations

The dataset uses machine-generated labels from a custom-trained quality classifier.

Scoring Rubric:

| Score | Category | Definition |
|---|---|---|
| 1 | Noise & Commercial | Spam, navigation elements, fictional content, strictly commercial text (advertisements) |
| 2 | Context-Specific | News, corporate descriptions, product reviews, personal opinions — describes topics without explaining underlying principles |
| 3 | Instructional | Explains general concepts through specific examples or guides; teaches transferable skills |
| 4 | Analytical | Analysis of historical patterns, scientific concepts, reasoning methods, or social phenomena |
| 5 | Foundational | Comprehensive explanations of complex topics and fundamental theories, comparable to high-quality textbook material |
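When inspecting scored documents, it can help to map the continuous `prediction` back to a rubric band. A small helper sketch (the category names follow the table above; clamping and rounding to the nearest band is an assumed convention, not part of the dataset):

```python
# Rubric category names follow the scoring table; mapping a continuous
# prediction to a band by clamping and rounding is illustrative only.
RUBRIC = {
    1: "Noise & Commercial",
    2: "Context-Specific",
    3: "Instructional",
    4: "Analytical",
    5: "Foundational",
}

def rubric_bucket(prediction: float) -> str:
    """Clamp the score to [1, 5] and round to the nearest rubric band."""
    band = min(5, max(1, round(prediction)))
    return RUBRIC[band]

print(rubric_bucket(2.7))  # Instructional
print(rubric_bucket(1.2))  # Noise & Commercial
```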

Annotation Process:

  1. Synthetic dataset generation: Gemini-2.0-Flash was used to annotate 301,357 randomly sampled documents via the Google Batch API. A Chain-of-Thought prompt forced the model to reason about whether a text explained underlying principles rather than relying on surface-level academic keywords. The teacher model achieved accuracy 0.93 / F1 0.76 (positive class: score ≥ 3) on a 340-document gold-standard validation set.

  2. Label distribution of the synthetic training set (mean score: 1.70; 90th percentile at score 3.0):

     | Score | Count | % |
     |---|---|---|
     | 1 | 129,036 | 42.8% |
     | 2 | 141,722 | 47.0% |
     | 3 | 23,162 | 7.7% |
     | 4 | 7,419 | 2.5% |
     | 5 | 18 | <0.01% |

  3. Classifier training: PKOBP/polish-roberta-8k was fine-tuned for 2 epochs with a regression head. Only the last 4 encoder layers were unfrozen to preserve general linguistic features. Training used fp16 precision on a single NVIDIA L40 GPU (lr=2e-5, cosine schedule, warmup ratio 0.1, weight decay 0.01). The model achieved F1 = 0.79 on the held-out test set (positive class: score ≥ 2.5).

  4. Large-scale inference: Scoring ran on NVIDIA RTX 4090 GPUs in fp16, with length-sorted batching to minimize padding overhead (~100 GPU-hours total).
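The length-sorted batching mentioned in step 4 can be sketched in a few lines: sorting documents by length before forming batches keeps sequence lengths within each batch similar, so little padding is needed to equalize them. The batch size and the character-length proxy below are illustrative assumptions:

```python
# Sort documents by length so each batch groups similar lengths, minimizing
# the padding needed to equalize sequences within a batch. Batch size and the
# character-length proxy are illustrative choices, not the thesis's exact setup.
def length_sorted_batches(docs, batch_size=4):
    order = sorted(range(len(docs)), key=lambda i: len(docs[i]))
    for start in range(0, len(order), batch_size):
        idx = order[start:start + batch_size]
        # Yield original indices too, so scores can be written back in input order.
        yield idx, [docs[i] for i in idx]

docs = ["short", "a much, much longer document " * 20, "mid-size text here", "tiny"]
for idx, batch in length_sorted_batches(docs, batch_size=2):
    print(idx, [len(d) for d in batch])
```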

## Personal and Sensitive Information

The dataset is sourced from public web data (FineWeb2) and publicly available PDFs (FinePDFs). As with any large web corpus, it may contain personal or sensitive information. The filtering process does not explicitly remove such content. Users should handle the data in accordance with applicable privacy regulations.

## Pretraining Validation

To confirm that the dataset produces better models than unfiltered alternatives, we ran controlled pretraining experiments at two scales. All hyperparameters were kept identical across runs; only the dataset composition varied.

| Config | Scale | Source | Quality Filter |
|---|---|---|---|
| Base-FW2 | 561M | FineWeb2 (Polish slice) | None — unfiltered baseline |
| HQ-FW2 | 561M | FineWeb2-HQ + FinePDFs-Edu (80/20) | External quality filter |
| FinetextPL-Edu | 561M | FineWeb2 + FinePDFs (Polish slice) | Score ≥ 2.5 (this dataset) |
| HQ-FW2 | 1.8B | FineWeb2-HQ + FinePDFs-Edu (80/20) | External quality filter |
| FinetextPL-Edu | 1.8B | FineWeb2 + FinePDFs (Polish slice) | Score ≥ 2.5 (this dataset) |

Models trained on FinetextPL-Edu (score ≥ 2.5) consistently outperform the unfiltered Base-FW2 baseline, particularly on reasoning and knowledge-retrieval tasks (ARC-Challenge-PL, HellaSwag-PL). Full experimental details and benchmark results will be published in the accompanying paper.

Evaluation used Bits-per-Byte (bpb) as the primary intrinsic metric, alongside a Polish benchmark suite: MMLU-PL, ARC-Challenge-PL, HellaSwag-PL, GSM8K-PL, Belebele-PL, LLMzSzŁ, PES, and TruthfulQA-PL.
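Bits-per-byte normalizes a model's cross-entropy by the raw byte count of the evaluation text rather than by token count, which makes it comparable across models with different tokenizers. A minimal sketch of the computation, with invented numbers:

```python
import math

# Bits-per-byte: total cross-entropy (in nats) over a corpus, converted to
# bits and normalized by the corpus size in raw bytes. Unlike per-token
# perplexity, bpb does not depend on the tokenizer's segmentation.
def bits_per_byte(mean_loss_nats: float, n_tokens: int, n_bytes: int) -> float:
    total_bits = mean_loss_nats * n_tokens / math.log(2)
    return total_bits / n_bytes

# Invented numbers: mean token loss of 2.3 nats over 1,000 tokens
# covering 4,200 bytes of text.
print(round(bits_per_byte(2.3, 1_000, 4_200), 3))
```

Lower is better; a model with a coarser tokenizer sees fewer tokens per byte, and the byte normalization cancels that difference out.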

561M scale benchmark results

1.8B scale benchmark results

## Acknowledgements

We gratefully acknowledge the Polish high-performance computing infrastructure PLGrid (HPC Center: ACK Cyfronet AGH) for providing computing facilities and support within computational grant no. PLG/2025/018955.

## Citation

This dataset was created as part of an engineering thesis. A formal citation will be provided upon publication. In the meantime, please reference as:

```bibtex
@misc{finetextpl-edu-2025,
  title        = {FinetextPL-Edu: A Polish Educational Corpus for Language Model Pretraining},
  author       = {Miłosz Poruba and Marcel Kowalik},
  year         = {2026},
  note         = {Engineering thesis},
  howpublished = {\url{https://huggingface.co/datasets/FinetextPL/FinetextPL-Edu}}
}
```