kowalikmarcel committed on
Commit 1a8fa28 · verified · 1 Parent(s): 5536f6c

Dataset card

@Milpo1 check this

Files changed (1): README.md (+72 −18)

README.md CHANGED
  path: data/*
---

@misc{kydlicek2025finepdfs,
  title={FinePDFs},
  author={Hynek Kydl{\'\i}{\v{c}}ek and Guilherme Penedo and Leandro von Werra},
  year={2025},
  publisher={Hugging Face},
  journal={Hugging Face repository},
  howpublished={\url{https://huggingface.co/datasets/HuggingFaceFW/finepdfs_edu}}
}

@misc{penedo2025fineweb2pipelinescale,
  title={FineWeb2: One Pipeline to Scale Them All -- Adapting Pre-Training Data Processing to Every Language},
  author={Guilherme Penedo and Hynek Kydlíček and Vinko Sabolčec and Bettina Messmer and Negar Foroutan and Amir Hossein Kargaran and Colin Raffel and Martin Jaggi and Leandro Von Werra and Thomas Wolf},
  year={2025},
  eprint={2506.20920},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2506.20920}
}

# FinetextPL-Edu

## Dataset Description

**FinetextPL-Edu** is a large-scale Polish corpus derived from the Polish subsets of FineWeb2 and FinePDFs. The dataset contains approximately 160 million documents, each annotated with a scalar score representing its "educational value." This score was generated by a custom-trained classifier designed to identify content suitable for training high-quality language models.

The primary goal of this dataset is to provide a resource for training Polish language models with an emphasis on factual grounding and reasoning ability. It was created by applying a methodology inspired by the FineWeb-Edu project to the Polish language, addressing the need for systematically filtered, high-quality native corpora.

This release includes the full source corpora (referred to as **Polish FineWeb**) with the predicted educational score for each document, allowing researchers to filter the data at any desired threshold.

### How to Use the Dataset

The core feature of this dataset is the `prediction` field, a float score between 1 and 5. Users can filter the dataset on this score to create subsets of varying quality and size. A common starting point, suggested in the source research, is a threshold of `score >= 2.5`, which retains a large share of high-quality tokens while filtering out the majority of low-quality documents.
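As a minimal sketch, the threshold filter can be expressed in plain Python (the sample records below are illustrative stand-ins, not real rows from the dataset):

```python
# Minimal sketch of threshold filtering on the `prediction` field.
# The sample records are invented for illustration.

EDU_THRESHOLD = 2.5  # starting point suggested above; raise it for smaller, cleaner subsets

def filter_educational(records, threshold=EDU_THRESHOLD):
    """Keep documents whose predicted educational score meets the threshold."""
    return [r for r in records if r["prediction"] >= threshold]

sample = [
    {"id": "doc-1", "prediction": 1.4},  # noise / commercial
    {"id": "doc-2", "prediction": 3.1},  # instructional
    {"id": "doc-3", "prediction": 2.5},  # borderline: kept at the default threshold
]

kept = filter_educational(sample)
print([r["id"] for r in kept])  # → ['doc-2', 'doc-3']
```

With the `datasets` library, the same predicate can be applied lazily, e.g. `load_dataset(<repo_id>, streaming=True).filter(lambda r: r["prediction"] >= 2.5)`, where `<repo_id>` is whichever repository this card is published under.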

## Dataset Structure

### Data Fields

* `text` (string): The main content of the document.
* `prediction` (float): The predicted educational quality score, ranging from 1 to 5. This is the output of the trained RoBERTa-based classifier.
* `dataset_source` (string): The source of the document, either `FineWeb2` or `FinePDFs`.
* `id` (string): A unique identifier for the document.
* `url` (string): The source URL of the document (primarily for `FineWeb2` data).
* `date` (string): The crawl date from the Common Crawl dump.
* `dump` (string): The identifier of the Common Crawl dump the document originated from.
* `file_path` (string): The path to the source WARC or PDF file.
* `offset` (int64): The offset within the source file (specific to `FinePDFs`).
* `full_doc_lid` (string): The language ID of the full document (specific to `FinePDFs`).
* `full_doc_lid_score` (float): The confidence score for the language ID (specific to `FinePDFs`).
* `is_truncated` (bool): A flag indicating whether the document was truncated (specific to `FinePDFs`).
* `minhash_cluster_size` (int64): The size of the document's MinHash cluster, used for deduplication.
* `duplicate_count` (int64): The number of duplicates found for the document (specific to `FinePDFs`).

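A record can be sanity-checked against this schema with a small helper. The field names and types below are transcribed from the list above; the example record itself is invented:

```python
# Sketch: validate that a record carries the core fields listed above.
# The schema is transcribed from this card; the example record is invented.

SCHEMA = {
    "text": str,
    "prediction": float,
    "dataset_source": str,
    "id": str,
}  # core fields only; source-specific fields (offset, full_doc_lid, ...) may be absent

def check_record(record, schema=SCHEMA):
    """Return a list of problems; an empty list means the record matches the core schema."""
    problems = []
    for field, typ in schema.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], typ):
            problems.append(f"bad type for {field}: {type(record[field]).__name__}")
    if "dataset_source" in record and record["dataset_source"] not in ("FineWeb2", "FinePDFs"):
        problems.append("unknown dataset_source")
    return problems

ok = {"text": "Przykładowy dokument.", "prediction": 3.2,
      "dataset_source": "FineWeb2", "id": "doc-0001"}
print(check_record(ok))  # → []
```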
### Data Splits

The dataset is not pre-split into train/validation/test sets. It is provided as a single, comprehensive corpus containing the full Polish slices of FineWeb2 (~150M documents) and FinePDFs (~10M documents) with their corresponding educational scores.

## Dataset Creation

### Curation Rationale

The quality of training data is a primary factor in language model performance. While several high-quality filtered English datasets exist (e.g., RefinedWeb, FineWeb-Edu), such systematic filtering has not been extensively studied for Polish. This work extends the principles of FineWeb-Edu to a morphologically complex, medium-resourced European language. The goal was to create a compact, high-quality dataset optimized for factual grounding and reasoning by training a classifier to estimate the educational value of web documents.

### Source Data

The corpus is a combination of two sources:

1. **[FineWeb2 (Polish slice)](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2)**: Approximately 150 million documents from the Polish portion of the FineWeb2 dataset, which is a filtered version of Common Crawl.
2. **[FinePDFs (Polish slice)](https://huggingface.co/datasets/HuggingFaceFW/finepdfs)**: Approximately 10 million documents from the Polish portion of the FinePDFs dataset, contributing more formal and structured text from academic, technical, and institutional sources.

### Annotations

The dataset does not contain manual annotations. Instead, it features machine-generated labels (`prediction` scores) from a custom-trained quality classifier.

**Annotation Process (Classifier Training):**

1. **Defining Educational Value**: A 5-point scoring system similar to [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) was used, based on the question: "Would this content appear in a school textbook or educational curriculum?"
   * **Score 1**: Noise & Commercial
   * **Score 2**: Context-Specific (news, reviews, opinions)
   * **Score 3**: Instructional
   * **Score 4**: Analytical
   * **Score 5**: Foundational (textbook-level explanations)

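Since the classifier emits a continuous score, a prediction can be mapped back to the nearest rubric band in one line. The rounding scheme below is an illustrative assumption, not a documented part of the published pipeline:

```python
# Map a continuous prediction onto the 5-point rubric above.
# Rounding to the nearest integer band is an illustrative assumption,
# not part of the published pipeline.

RUBRIC = {
    1: "Noise & Commercial",
    2: "Context-Specific",
    3: "Instructional",
    4: "Analytical",
    5: "Foundational",
}

def rubric_band(prediction: float) -> str:
    """Clamp to [1, 5] and round to the nearest rubric label."""
    band = min(5, max(1, round(prediction)))
    return RUBRIC[band]

print(rubric_band(2.7))  # → Instructional
```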
2. **Synthetic Dataset Generation**: A "teacher model" (**Gemini-2.0-Flash**) was used to annotate a random sample of 301,357 documents from the source corpus. A Chain-of-Thought (CoT) prompt was used to ensure the model reasoned about the content's underlying principles rather than surface-level keywords.

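The exact prompt is not published in this card; as a hedged sketch, a CoT annotation prompt of the kind described might look like the following, with every line of wording assumed:

```python
# Hypothetical sketch of a CoT annotation prompt for the teacher model.
# The actual prompt used with Gemini-2.0-Flash is not published in this card;
# all wording below is an assumption for illustration.

PROMPT_TEMPLATE = """You are rating the educational value of a Polish web document.
First, reason step by step about the underlying principles the text teaches,
ignoring surface keywords. Then answer: would this content appear in a school
textbook or educational curriculum?

Scale: 1 = noise/commercial, 2 = context-specific, 3 = instructional,
4 = analytical, 5 = foundational.

Document:
{document}

Give your reasoning, then a final line of the form: SCORE: <1-5>"""

def build_prompt(document: str) -> str:
    return PROMPT_TEMPLATE.format(document=document)

prompt = build_prompt("Fotosynteza to proces, w którym rośliny...")
print("SCORE:" in prompt)  # → True
```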
3. **Classifier Training**: A RoBERTa-based model for Polish ([`PKOBP/polish-roberta-8k`](https://huggingface.co/PKOBP/polish-roberta-8k)) was fine-tuned on this synthetic dataset. A regression head was added to predict the scalar educational score. The model was trained for 2 epochs.

4. **Large-Scale Inference**: The trained classifier was used to score the entire ~160M-document corpus. To make this computationally feasible, all documents were truncated to the first 1024 tokens. This truncation resulted in a negligible performance drop on a test set (F1 score from 0.79 to 0.78).

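The truncation step can be sketched in a few lines. A whitespace split stands in for the real subword tokenizer here; the actual pipeline would use the classifier's own tokenizer:

```python
# Sketch of the pre-inference truncation step: keep only the first 1024 tokens.
# A whitespace split is a stand-in for the real subword tokenizer.

MAX_TOKENS = 1024

def truncate_for_scoring(text: str, max_tokens: int = MAX_TOKENS) -> str:
    """Drop everything past the first `max_tokens` tokens before scoring."""
    tokens = text.split()  # stand-in tokenization
    return " ".join(tokens[:max_tokens])

long_doc = "słowo " * 5000
print(len(truncate_for_scoring(long_doc).split()))  # → 1024
```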
### Personal and Sensitive Information

The dataset is sourced from public web data via Common Crawl (FineWeb2) and publicly available PDFs (FinePDFs). As with any large web corpus, it may contain personal or sensitive information. The filtering process does not explicitly remove such information. Users should be aware of this and handle the data in accordance with privacy best practices.