Datasets: Milpo1 committed "update readme" (commit 2527352, parent: 8793bf0) to README.md
> **TL;DR** — ~160 million Polish documents from FineWeb2 and FinePDFs, each annotated with a `prediction` score (1–5) estimating educational value. Filter on `prediction >= 2.5` to retain a quality-focused subset while preserving a robust portion of training tokens. Created as part of an engineering thesis on educational corpus curation for Polish LLM pretraining.

Token volume estimates using the APT4 tokenizer:
- FineWeb2 slice: ~109.8B tokens
- FinePDFs slice: ~37.3B tokens
```python
from datasets import load_dataset
ds = load_dataset("FinetextPL/FinetextPL-Edu", split="train", streaming=True)

# We recommend filtering on prediction >= 2.5.
# This removes ~90% of documents by count but preserves a substantial
# share of tokens, because high-scoring documents are significantly longer.
edu = ds.filter(lambda x: x["prediction"] >= 2.5)

# You can also filter by source.
web_only = ds.filter(lambda x: x["dataset_source"] == "fineweb2")
pdfs_only = ds.filter(lambda x: x["dataset_source"] == "finepdfs")
```
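The streaming `filter` calls above are lazy: nothing is downloaded or scored until you iterate. A minimal stand-in using plain generators (in-memory toy records with made-up scores, no network access) illustrates the pattern:

```python
from itertools import islice

# Hypothetical in-memory stand-in for the streaming dataset.
docs = [
    {"text": "Krótki komentarz.", "prediction": 1.2, "dataset_source": "fineweb2"},
    {"text": "Obszerny artykuł o fotosyntezie...", "prediction": 3.8, "dataset_source": "fineweb2"},
    {"text": "Skrypt wykładu z analizy matematycznej...", "prediction": 4.1, "dataset_source": "finepdfs"},
]

def stream(records):
    # Mimics a streaming split: yields one example at a time.
    yield from iter(records)

# Lazy filter, analogous to ds.filter(...) on a streaming dataset.
edu = (x for x in stream(docs) if x["prediction"] >= 2.5)

# Materialize only as many examples as needed.
preview = list(islice(edu, 2))
print([d["dataset_source"] for d in preview])  # ['fineweb2', 'finepdfs']
```

With the real dataset, the same `islice` trick is a cheap way to inspect a few filtered examples without streaming the full corpus.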

---

## Dataset Description
**FinetextPL-Edu** is a large-scale Polish corpus derived from the Polish subsets of FineWeb2 and FinePDFs. The dataset contains approximately 160 million documents, each annotated with a scalar score representing its "educational value". This score was generated by a custom-trained RoBERTa classifier based on [PKOBP/polish-roberta-8k](https://huggingface.co/PKOBP/polish-roberta-8k), designed to identify content suitable for training high-quality language models.
The primary goal of this dataset is to provide a resource for training Polish language models with an emphasis on factual grounding and reasoning ability. It was created by applying a methodology inspired by the [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) project to the Polish language, addressing the need for systematically filtered, high-quality native corpora.
The core feature of this dataset is the `prediction` field — a float score reflecting the 1–5 educational annotation rubric. The regression head output may occasionally fall slightly outside this range.
A threshold of `score >= 2.5` is the recommended starting point: it retains only the top ~10% of documents by count, but these documents are substantially longer than average and contribute a disproportionately large share of training tokens.
## Dataset Structure

*Figures: document count by prediction score (FineWeb2 and FinePDFs), and average character count per document by prediction score (FineWeb2 and FinePDFs).*

The charts reveal a key insight: **document length grows strongly with educational score**. Low-scoring documents (S < 2.0) are extremely short — typically under 500 characters (navigation elements, social media comments). As scores exceed 3.5, average length grows dramatically:
- **FineWeb2**: peak ~6,000 characters per document at S ≥ 3.5
- **FinePDFs**: high-scoring PDFs (S > 2.5) average 30,000–35,000 characters
This means a strict threshold removes the vast majority of *documents* while preserving a robust portion of the actual *tokens* — a useful property for curating pretraining data.
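This asymmetry is easy to check on any scored sample. A toy illustration with made-up (score, token-length) pairs; the real distributions come from the dataset itself:

```python
# Hypothetical sample: (prediction score, document length in tokens).
sample = [(1.2, 300), (1.5, 250), (1.8, 400), (2.1, 500),
          (2.3, 450), (3.1, 4000), (3.8, 6000), (4.2, 9000)]

threshold = 2.5
kept = [(s, n) for s, n in sample if s >= threshold]

# Fraction of documents kept vs fraction of tokens kept.
doc_share = len(kept) / len(sample)
token_share = sum(n for _, n in kept) / sum(n for _, n in sample)

print(f"documents kept: {doc_share:.0%}, tokens kept: {token_share:.0%}")
```

In this toy sample the threshold drops most documents but keeps most tokens, because the few high-scoring documents are much longer; the corpus-level figures above show the same effect at scale.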
> **Note on length bias:** The classifier was fine-tuned on labels generated by an LLM with a 10,000-character context window, but scores the corpus using only the first 1,024 tokens per document. The positive score–length correlation partly reflects this supervisory bias: the classifier detects stylistic markers in the document opening that act as proxies for the length-correlated preferences of the labeling model.
### Data Fields

| Field | Type | Source | Description |
|---|---|---|---|
| `text` | string | Both | Main document content |
| `prediction` | float | Both | Educational quality score (~1–5) |
| `dataset_source` | string | Both | `"FineWeb2"` or `"FinePDFs"` |
| `id` | string | Both | Unique document identifier |
| `file_path` | string | Both | Path to the source WARC or PDF file |
| `is_truncated` | bool | FinePDFs | Whether the document was truncated |
| `duplicate_count` | int64 | FinePDFs | Number of near-duplicate copies found |
### Data Splits
The dataset is provided as a single corpus — it is not pre-split into train/validation/test sets. It contains the full Polish slices of FineWeb2 (~150M documents) and FinePDFs (~10M documents) with their corresponding educational scores.
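If a held-out set is needed, one common approach is to derive it deterministically from the `id` field so that membership is stable across runs. A sketch, assuming a SHA-256 hash and an illustrative ~0.5% validation bucket (neither is part of this dataset's tooling):

```python
import hashlib

def is_validation(doc_id: str, val_permille: int = 5) -> bool:
    """Deterministically assign ~0.5% of documents to validation by id hash."""
    h = int(hashlib.sha256(doc_id.encode("utf-8")).hexdigest(), 16)
    return h % 1000 < val_permille

# Works the same way on a streaming dataset:
# val = ds.filter(lambda x: is_validation(x["id"]))
# train = ds.filter(lambda x: not is_validation(x["id"]))
```

Hash-based assignment avoids shuffling the full corpus and keeps the split reproducible across machines and library versions.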
## Dataset Creation
### Curation Rationale
The quality of training data is a primary factor in language model performance. While several high-quality filtered English datasets exist (e.g., RefinedWeb, FineWeb-Edu), such systematic filtering has not been extensively applied to Polish. This work extends the principles of FineWeb-Edu to a morphologically complex, medium-resourced European language, creating a resource optimized for factual grounding and reasoning.
### Source Data
1. **[FineWeb2 (Polish slice)](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2)**: ~150 million documents from the Polish portion of FineWeb2, a filtered version of Common Crawl.

2. **FinePDFs (Polish slice)**: ~10 million documents from the Polish portion of FinePDFs.

### Annotations
The dataset uses machine-generated labels from a custom-trained quality classifier.
**Scoring Rubric:**
**Annotation Process:**
1. **Synthetic dataset generation**: Gemini-2.0-Flash was used to annotate 301,357 randomly sampled documents via the Google Batch API. A Chain-of-Thought prompt forced the model to reason about whether text explained underlying principles rather than relying on surface-level academic keywords. The teacher model achieved **accuracy 0.93 / F1 0.76** (positive class: score ≥ 3) on a 340-document gold-standard validation set.
2. **Label distribution** of the synthetic training set (mean score: **1.70**; 90th percentile at score **3.0**):

| Score | Documents | Share |
|---|---|---|
| 4 | 7,419 | 2.5% |
| 5 | 18 | <0.01% |

This confirms that only the top ~10% of web-crawled Polish text meets even a moderate standard of educational value.
3. **Classifier training**: [`PKOBP/polish-roberta-8k`](https://huggingface.co/PKOBP/polish-roberta-8k) was fine-tuned for 2 epochs with a regression head. Only the last 4 encoder layers were unfrozen to preserve general linguistic features. Training used fp16 precision on a single NVIDIA L40 GPU (lr=2e-5, cosine schedule, warmup ratio 0.1, weight decay 0.01). The model achieved **F1 = 0.79** on the held-out test set (positive class: score ≥ 2.5).
4. **Large-scale inference**: Scoring ran on NVIDIA RTX 4090 GPUs in fp16, with length-sorted batching to minimize padding overhead (~100 GPU-hours total).
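Length-sorted batching, used to cut padding overhead during inference, can be sketched in a few lines: sort documents by length and cut into consecutive batches, so each batch pads only to its own maximum rather than to the longest document overall. The helper names and character-length proxy are illustrative, not the actual pipeline code:

```python
def length_sorted_batches(docs, batch_size):
    """Group documents of similar length so each batch pads only to its own max."""
    docs_sorted = sorted(docs, key=len)
    for start in range(0, len(docs_sorted), batch_size):
        yield docs_sorted[start:start + batch_size]

def padding_waste(batches):
    # Cells spent on padding: each document in a batch is padded to the batch max.
    return sum(sum(max(map(len, b)) - len(d) for d in b) for b in batches)

# Toy corpus with a mix of very short and very long documents.
docs = ["a" * n for n in (5, 1200, 8, 950, 12, 1100)]

sorted_waste = padding_waste(length_sorted_batches(docs, batch_size=3))
naive_waste = padding_waste(docs[i:i + 3] for i in range(0, len(docs), 3))
print(sorted_waste, naive_waste)
```

Because short and long documents no longer share a batch, the sorted scheme wastes an order of magnitude fewer padded cells on this toy input; the same effect is what makes large-scale scoring cheaper in practice.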
### Personal and Sensitive Information

The dataset is sourced from public web data (FineWeb2) and publicly available PDFs (FinePDFs). No additional personal data filtering is applied beyond that of the source corpora.

## Pretraining Validation
To confirm the dataset produces better models than unfiltered alternatives, we ran controlled pretraining experiments at two scales. All hyperparameters were kept identical across runs; only the dataset composition varied.

| Config | Scale | Source | Quality Filter |
|---|---|---|---|
| Base-FW2 | 561M | FineWeb2 (Polish slice) | None — unfiltered baseline |
| HQ-FW2 | 561M | FineWeb2-HQ + FinePDFs-Edu (80/20) | External quality filter |
| **FinetextPL-Edu** | 561M | FineWeb2 + FinePDFs (Polish slice) | Score ≥ 2.5 (this dataset) |
| HQ-FW2 | 1.8B | FineWeb2-HQ + FinePDFs-Edu (80/20) | External quality filter |
| **FinetextPL-Edu** | 1.8B | FineWeb2 + FinePDFs (Polish slice) | Score ≥ 2.5 (this dataset) |
Training on FinetextPL-Edu (score ≥ 2.5) consistently outperforms the unfiltered Base-FW2 baseline, particularly on reasoning and knowledge-retrieval tasks (ARC-Challenge-PL, HellaSwag-PL). Full experimental details and benchmark results will be published in the accompanying paper.

Evaluation used Bits-per-Byte (bpb) as the primary intrinsic metric, alongside a suite of Polish downstream benchmarks.

<div style="display: flex; gap: 2%;">
  <div style="width: 49%; text-align: center;">
    <img src="./assets/1.5B-models.png" style="width: 100%;"/>
    <p>1.8B scale benchmark results</p>
  </div>
</div>
## Limitations
- **Length bias**: The classifier scores only the first 1,024 tokens of each document, while the teacher LLM labeled using up to ~10,000 characters. The positive score–length correlation partly reflects this supervisory artifact — very long documents may receive inflated scores regardless of quality.
- **No explicit PII removal**: The dataset inherits the privacy characteristics of its source corpora (FineWeb2 and FinePDFs) and does not apply additional personal data filtering.
- **Polish-centric scoring**: The classifier is optimized for Polish. Documents with significant code-switching or mixed-language content are not explicitly evaluated.
## Acknowledgements
We gratefully acknowledge Polish high-performance computing infrastructure PLGrid (HPC Center: ACK Cyfronet AGH) for providing computer facilities and support within computational grant no. PLG/2025/018955.
|