kowalikmarcel committed
Commit 0012b59 · 1 Parent(s): 5536f6c
Files changed (3)
  1. README.md +145 -17
  2. assets/1.5B-models.png +3 -0
  3. assets/500M-models.png +3 -0
README.md CHANGED
@@ -2,10 +2,16 @@
  license: odc-by
  language:
  - pl
  size_categories:
  - 100M<n<1B
  task_categories:
  - text-generation

  configs:
  - config_name: default
@@ -14,21 +20,143 @@ configs:
  path: data/*

  ---
- @misc{kydlicek2025finepdfs,
-   title={FinePDFs},
-   author={Hynek Kydl{\'\i}{\v{c}}ek and Guilherme Penedo and Leandro von Werra},
-   year={2025},
-   publisher = {Hugging Face},
-   journal = {Hugging Face repository},
-   howpublished = {\url{https://huggingface.co/datasets/HuggingFaceFW/finepdfs_edu}}
- }
-
- @misc{penedo2025fineweb2pipelinescale,
-   title={FineWeb2: One Pipeline to Scale Them All -- Adapting Pre-Training Data Processing to Every Language},
-   author={Guilherme Penedo and Hynek Kydlíček and Vinko Sabolčec and Bettina Messmer and Negar Foroutan and Amir Hossein Kargaran and Colin Raffel and Martin Jaggi and Leandro Von Werra and Thomas Wolf},
-   year={2025},
-   eprint={2506.20920},
-   archivePrefix={arXiv},
-   primaryClass={cs.CL},
-   url={https://arxiv.org/abs/2506.20920},
- }
  license: odc-by
  language:
  - pl
+ pretty_name: FinetextPL-Edu
  size_categories:
  - 100M<n<1B
  task_categories:
  - text-generation
+ tags:
+ - text-quality
+ - educational
+ - polish-nlp
+ - pretraining

  configs:
  - config_name: default

  path: data/*

  ---

# FinetextPL-Edu

> **TL;DR** ~160 million Polish documents from FineWeb2 and FinePDFs, each annotated with a `prediction` score (1–5) estimating educational value. Filter on `prediction >= 2.5` to retain a quality-focused subset while preserving a robust portion of training tokens. Created as part of an engineering thesis on educational corpus curation for Polish LLM pretraining.

Token volume estimates using the APT4 tokenizer:
- FineWeb2 slice: ~109.8B tokens
- FinePDFs slice: ~37.3B tokens

## Quick Start

```python
from datasets import load_dataset

ds = load_dataset("FinetextPL/FinetextPL-Edu", split="train", streaming=True)

# We recommend filtering by scores >= 2.5
edu = ds.filter(lambda x: x["prediction"] >= 2.5)

# You can also filter by source (matched case-insensitively to be safe)
web_only = ds.filter(lambda x: x["dataset_source"].lower() == "fineweb2")
pdfs_only = ds.filter(lambda x: x["dataset_source"].lower() == "finepdfs")
```

---

## Dataset Description

**FinetextPL-Edu** is a large-scale Polish corpus derived from the Polish subsets of FineWeb2 and FinePDFs. The dataset contains approximately 160 million documents, each annotated with a scalar score representing its "educational value". This score was generated by a custom-trained RoBERTa classifier based on [PKOBP/polish-roberta-8k](https://huggingface.co/PKOBP/polish-roberta-8k), designed to identify content suitable for training high-quality language models.

The primary goal of this dataset is to provide a resource for training Polish language models with an emphasis on factual grounding and reasoning ability. It was created by applying a methodology inspired by the [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) project to the Polish language, addressing the need for systematically filtered, high-quality native corpora.

The core feature of this dataset is the `prediction` field: a float score reflecting the 1–5 educational annotation rubric. A threshold of `prediction >= 2.5` is the recommended starting point: it retains only the top ~10% of documents by count, but these documents are substantially longer than average and contribute a large share of training tokens.
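The trade-off described above (few documents retained, many tokens preserved) can be sketched on synthetic records. The numbers below are purely illustrative, not measured dataset statistics:

```python
# Illustrative only: synthetic records mimicking the dataset schema.
# Real retention rates must be measured on the actual data.
records = [
    {"prediction": 1.2, "text": "x" * 200},   # short, low-quality page
    {"prediction": 1.8, "text": "x" * 300},
    {"prediction": 2.1, "text": "x" * 400},
    {"prediction": 2.7, "text": "x" * 3000},  # higher-scoring docs tend to be longer
    {"prediction": 3.4, "text": "x" * 5000},
]

THRESHOLD = 2.5
kept = [r for r in records if r["prediction"] >= THRESHOLD]

doc_retention = len(kept) / len(records)
char_retention = sum(len(r["text"]) for r in kept) / sum(len(r["text"]) for r in records)

print(f"docs kept: {doc_retention:.0%}, characters kept: {char_retention:.0%}")
```

On this toy sample 40% of the documents survive the cut, yet they carry roughly 90% of the characters, which is the shape of the effect the threshold exploits.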

## Data Fields

| Field | Type | Source | Description |
|---|---|---|---|
| `text` | string | Both | Main document content |
| `prediction` | float | Both | Educational quality score (approximately 1–5) |
| `dataset_source` | string | Both | `"FineWeb2"` or `"FinePDFs"` |
| `id` | string | Both | Unique document identifier |
| `file_path` | string | Both | Path to the source WARC or PDF file |
| `minhash_cluster_size` | int64 | Both | Size of the document's MinHash deduplication cluster (useful for custom upsampling strategies) |
| `url` | string | FineWeb2 | Source URL |
| `date` | string | FineWeb2 | Crawl date from Common Crawl |
| `dump` | string | FineWeb2 | Common Crawl dump identifier |
| `offset` | int64 | FinePDFs | Byte offset within the source file |
| `full_doc_lid` | string | FinePDFs | Language ID of the full document |
| `full_doc_lid_score` | float | FinePDFs | Language ID confidence score |
| `is_truncated` | bool | FinePDFs | Whether the document was truncated |
| `duplicate_count` | int64 | FinePDFs | Number of near-duplicate copies found |
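Since several fields exist for only one source, downstream code often wants a uniform view of a record. A minimal sketch with a hypothetical `normalize` helper (field names follow the table above; the sample values are invented):

```python
# Hypothetical helper, not part of the dataset tooling: collapse the
# per-source optional fields into one uniform dict.
def normalize(record):
    return {
        "id": record["id"],
        "text": record["text"],
        "prediction": record["prediction"],
        "source": record["dataset_source"].lower(),
        # FineWeb2-only fields fall back to None on FinePDFs rows, and
        # FinePDFs-only fields get a safe default on FineWeb2 rows.
        "url": record.get("url"),
        "date": record.get("date"),
        "is_truncated": record.get("is_truncated", False),
    }

row = {
    "id": "doc-1", "text": "przykład", "prediction": 3.1,
    "dataset_source": "FineWeb2",
    "url": "https://example.pl", "date": "2024-06-01",
}
print(normalize(row)["source"])  # fineweb2
```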

### Source Data

1. **[FineWeb2 (Polish slice)](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2)**: ~150 million documents from the Polish portion of FineWeb2, a filtered version of Common Crawl.
2. **[FinePDFs (Polish slice)](https://huggingface.co/datasets/HuggingFaceFW/finepdfs)**: ~10 million documents from the Polish portion of FinePDFs, contributing formal and structured text from academic, technical, and institutional sources.

### Annotations

The dataset uses machine-generated labels from a custom-trained quality classifier.

**Scoring Rubric:**

| Score | Category | Definition |
|---|---|---|
| 1 | Noise & Commercial | Spam, navigation elements, fictional content, strictly commercial text (advertisements) |
| 2 | Context-Specific | News, corporate descriptions, product reviews, personal opinions; describes topics without explaining underlying principles |
| 3 | Instructional | Explains general concepts through specific examples or guides; teaches transferable skills |
| 4 | Analytical | Analysis of historical patterns, scientific concepts, reasoning methods, or social phenomena |
| 5 | Foundational | Comprehensive explanations of complex topics and fundamental theories, comparable to high-quality textbook material |
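To map a continuous `prediction` back onto the rubric for inspection, clamping to the 1–5 range and rounding to the nearest category is sufficient. A small illustrative helper (not part of the dataset tooling):

```python
# Category names taken from the rubric table above.
RUBRIC = {
    1: "Noise & Commercial",
    2: "Context-Specific",
    3: "Instructional",
    4: "Analytical",
    5: "Foundational",
}

def rubric_label(prediction: float) -> str:
    """Clamp the regression output to [1, 5] and round to the nearest category."""
    clamped = min(5.0, max(1.0, prediction))
    return RUBRIC[round(clamped)]

print(rubric_label(2.7))  # Instructional
print(rubric_label(0.4))  # Noise & Commercial
```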

**Annotation Process:**

1. **Synthetic dataset generation**: Gemini-2.0-Flash was used to annotate 301,357 randomly sampled documents via the Google Batch API. A Chain-of-Thought prompt forced the model to reason about whether the text explained underlying principles rather than relying on surface-level academic keywords. The teacher model achieved **accuracy 0.93 / F1 0.76** (positive class: score ≥ 3) on a 340-document gold-standard validation set.

2. **Label distribution** of the synthetic training set (mean score: **1.70**; 90th percentile at score **3.0**):

| Score | Count | % |
|-------|---------|--------|
| 1 | 129,036 | 42.8% |
| 2 | 141,722 | 47.0% |
| 3 | 23,162 | 7.7% |
| 4 | 7,419 | 2.5% |
| 5 | 18 | <0.01% |
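The reported mean score follows directly from the distribution table and can be verified in a few lines:

```python
# Counts from the label-distribution table above.
counts = {1: 129_036, 2: 141_722, 3: 23_162, 4: 7_419, 5: 18}

total = sum(counts.values())
mean = sum(score * n for score, n in counts.items()) / total

print(total)           # 301357
print(round(mean, 2))  # 1.7
```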

3. **Classifier training**: [`PKOBP/polish-roberta-8k`](https://huggingface.co/PKOBP/polish-roberta-8k) was fine-tuned for 2 epochs with a regression head. Only the last 4 encoder layers were unfrozen to preserve general linguistic features. Training used fp16 precision on a single NVIDIA L40 GPU (lr=2e-5, cosine schedule, warmup ratio 0.1, weight decay 0.01). The model achieved **F1 = 0.79** on the held-out test set (positive class: score ≥ 2.5).
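The learning-rate schedule from step 3 (linear warmup over the first 10% of steps, then cosine decay) can be written as a pure function. This is a generic reimplementation of that schedule shape, not the exact trainer code:

```python
import math

def lr_at(step, total_steps, base_lr=2e-5, warmup_ratio=0.1):
    """Linear warmup for the first warmup_ratio of steps, then cosine decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

total = 1000
print(lr_at(0, total))     # 0.0 (start of warmup)
print(lr_at(100, total))   # 2e-05 (peak, end of warmup)
print(lr_at(1000, total))  # ~0.0 (fully decayed)
```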

4. **Large-scale inference**: Scoring ran on NVIDIA RTX 4090 GPUs in fp16, with length-sorted batching to minimize padding overhead (~100 GPU-hours total).
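Length-sorted batching works because each batch only pads up to its own longest document rather than the global maximum. A schematic comparison against arrival-order batching (toy lengths, not real data):

```python
def length_sorted_batches(texts, batch_size):
    """Sort by length, then chunk: each batch pads only to its own maximum."""
    ordered = sorted(texts, key=len)
    return [ordered[i:i + batch_size] for i in range(0, len(ordered), batch_size)]

def padding_overhead(batches):
    """Fraction of positions that are padding across all batches."""
    padded = sum(len(b) * max(len(t) for t in b) for b in batches)
    real = sum(len(t) for b in batches for t in b)
    return 1 - real / padded

texts = ["a" * n for n in (5, 100, 7, 90, 6, 95)]
naive = [texts[:3], texts[3:]]           # arrival order: short and long docs mixed
smart = length_sorted_batches(texts, 3)  # length-sorted: similar lengths together

print(padding_overhead(naive) > padding_overhead(smart))  # True
```

On this toy input the naive batches waste roughly half their positions on padding, while the sorted batches waste under 6%.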

### Personal and Sensitive Information

The dataset is sourced from public web data (FineWeb2) and publicly available PDFs (FinePDFs). As with any large web corpus, it may contain personal or sensitive information. The filtering process does not explicitly remove such content. Users should handle the data in accordance with applicable privacy regulations.

## Pretraining Validation

To confirm that the dataset produces better models than unfiltered alternatives, we ran controlled pretraining experiments at two scales. All hyperparameters were kept identical across runs; only the dataset composition varied.

| Config | Scale | Source | Quality Filter |
|---|---|---|---|
| Base-FW2 | 561M | FineWeb2 (Polish slice) | None (unfiltered baseline) |
| HQ-FW2 | 561M | FineWeb2-HQ + FinePDFs-Edu (80/20) | External quality filter |
| **FinetextPL-Edu** | 561M | FineWeb2 + FinePDFs (Polish slice) | Score ≥ 2.5 (this dataset) |
| HQ-FW2 | 1.8B | FineWeb2-HQ + FinePDFs-Edu (80/20) | External quality filter |
| **FinetextPL-Edu** | 1.8B | FineWeb2 + FinePDFs (Polish slice) | Score ≥ 2.5 (this dataset) |

Training on FinetextPL-Edu (score ≥ 2.5) consistently outperforms the unfiltered Base-FW2 baseline, particularly on reasoning and knowledge-retrieval tasks (ARC-Challenge-PL, HellaSwag-PL). Full experimental details and benchmark results will be published in the accompanying paper.

Evaluation used Bits-per-Byte (bpb) as the primary intrinsic metric, alongside a Polish benchmark suite: MMLU-PL, ARC-Challenge-PL, HellaSwag-PL, GSM8K-PL, Belebele-PL, LLMzSzŁ, PES, and TruthfulQA-PL.
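Bits-per-Byte makes losses comparable across tokenizers by charging the model's total cross-entropy against the raw UTF-8 byte count of the evaluation text. A sketch of the usual conversion (this is the standard formula, assumed here; the accompanying paper may define details differently):

```python
import math

def bits_per_byte(mean_loss_nats, n_tokens, n_bytes):
    """Convert mean per-token cross-entropy (in nats) to bits per UTF-8 byte."""
    total_bits = mean_loss_nats * n_tokens / math.log(2)  # nats -> bits
    return total_bits / n_bytes

# Illustrative numbers: 2.0 nats/token over 1M tokens covering 4M bytes of text.
print(round(bits_per_byte(2.0, 1_000_000, 4_000_000), 4))  # 0.7213
```

Because the denominator is bytes rather than tokens, a model cannot lower its score simply by using a tokenizer with fewer, longer tokens.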

<div style="display: flex; gap: 2%;">
  <div style="width: 49%; text-align: center;">
    <img src="./assets/500M-models.png" style="width: 100%;"/>
    <p>561M scale benchmark results</p>
  </div>
  <div style="width: 49%; text-align: center;">
    <img src="./assets/1.5B-models.png" style="width: 100%;"/>
    <p>1.8B scale benchmark results</p>
  </div>
</div>

## Acknowledgements

We gratefully acknowledge the Polish high-performance computing infrastructure PLGrid (HPC Center: ACK Cyfronet AGH) for providing computer facilities and support within computational grant no. PLG/2025/018955.

## Citation

This dataset was created as part of an engineering thesis. A formal citation will be provided upon publication. In the meantime, please reference it as:

```bibtex
@misc{finetextpl-edu-2025,
  title        = {FinetextPL-Edu: A Polish Educational Corpus for Language Model Pretraining},
  author       = {Miłosz Poruba and Marcel Kowalik},
  year         = {2026},
  note         = {Engineering thesis},
  howpublished = {\url{https://huggingface.co/datasets/FinetextPL/FinetextPL-Edu}}
}
```
assets/1.5B-models.png ADDED

Git LFS Details

  • SHA256: 6b0ba8fe5e0a12bea4eacdb5971ac07cbddf88b12e7eb23ae5c9614bea3546a8
  • Pointer size: 131 Bytes
  • Size of remote file: 115 kB
assets/500M-models.png ADDED

Git LFS Details

  • SHA256: d27e7ee7a8d60cc2a46a63166fa9a3ca5d2ddb59652899be90fd1846887d453a
  • Pointer size: 131 Bytes
  • Size of remote file: 160 kB