LeMoussel committed on
Commit 4635d70 · verified · 1 Parent(s): 2342e07

Upload README.md

Files changed (1)
  1. README.md +72 -68
README.md CHANGED
@@ -1,68 +1,72 @@
- ---
- license: cc-by-4.0
- task_categories:
- - text-generation
- language:
- - fr
- pretty_name: 📚 FineWiki
- size_categories:
- - 100K<n<1M
- configs:
- - config_name: fr_removed
- data_files:
- - split: train
- path: data/fr_removed.parquet
- - config_name: fr
- data_files:
- - split: train
- path: data/fr.parquet
- ---
-
- <center>
- <img src="https://huggingface.co/datasets/LeMoussel/finewiki/resolve/main/finewik-logo.png" alt="FineWiki: High-quality text pretraining dataset derived from the French edition of Wikipedia">
- </center>
-
- # 📚 FineWiki
-
- ## Dataset Overview
-
- **FineWiki** is a high-quality French-language dataset designed for pretraining and NLP tasks. It is derived from the French edition of Wikipedia using the *Wikipedia Structured Contents* dataset released by the Wikimedia Foundation on [Kaggle](https://www.kaggle.com/datasets/wikimedia-foundation/wikipedia-structured-contents).
-
- Each entry is a structured JSON line representing a full Wikipedia article, parsed and cleaned from HTML snapshots provided by [Wikimedia Enterprise](https://enterprise.wikimedia.com/docs/snapshot/).
-
- The dataset has been carefully filtered and deduplicated. It retains only the most relevant textual content such as article summaries, short descriptions, main image URLs, infoboxes, and cleaned section texts. Non-textual or noisy elements (like references, citations, and markdown artifacts) have been removed to provide a cleaner signal for NLP model training.
-
- To encourage reusability and transparency, we also provide a version containing the articles **excluded** during filtering (config: `fr_removed`). This enables users to reapply their own filtering strategies. The full data = filtered + removed sets.
-
- ## Data Structure
-
- * **Language**: French (`fr`)
- * **Fields**:
-
- * `TODO`: TODO
- * `description`: Short description
-
- ## Source and Processing
-
- The original data is sourced from the [Wikipedia Structured Contents (Kaggle)](https://www.kaggle.com/datasets/wikimedia-foundation/wikipedia-structured-contents) dataset. It was extracted from HTML snapshots provided by Wikimedia Enterprise, then parsed and cleaned to retain only the most useful and structured textual elements for machine learning.
-
- The dataset has been carefully filtered and deduplicated. The filtering follows the same rules as those applied with [FineWeb2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2) using the [Datatrove](https://github.com/huggingface/datatrove) library.
-
- This preprocessing step aims to improve readability, consistency, and structure, helping language models learn more effectively.
-
- ## Data Splits
-
- Currently, the dataset is provided as a single `train` split. No predefined validation or test sets are included. Users are encouraged to create their own splits as needed.
-
- ## How to Use
-
- You can load FineWiki using the 🤗 `datasets` library like this:
-
- ```python
- from datasets import load_dataset
-
- dataset = load_dataset("LeMoussel/finewiki", split="train")
-
- # Example: print the first article
- print(dataset[0])
- ```
 
 
 
 
 
+ ---
+ license: cc-by-4.0
+ task_categories:
+ - text-generation
+ language:
+ - fr
+ pretty_name: 📚 FineWiki
+ size_categories:
+ - 100K<n<1M
+ configs:
+ - config_name: fr_removed
+ data_files:
+ - split: train
+ path: data/fr_removed.parquet
+ - config_name: fr
+ data_files:
+ - split: train
+ path: data/fr.parquet
+ ---
+
+ <center>
+ <img src="https://huggingface.co/datasets/LeMoussel/finewiki/resolve/main/finewik-logo.png" alt="FineWiki: High-quality text pretraining dataset derived from the French edition of Wikipedia">
+ </center>
+
+ # 📚 FineWiki
+
+ ## Dataset Overview
+
+ **FineWiki** is a high-quality French-language dataset designed for pretraining and NLP tasks. It is derived from the French edition of Wikipedia using the *Wikipedia Structured Contents* dataset released by the Wikimedia Foundation on [Kaggle](https://www.kaggle.com/datasets/wikimedia-foundation/wikipedia-structured-contents).
+
+ Each entry is a structured JSON line representing a full Wikipedia article, parsed and cleaned from HTML snapshots provided by [Wikimedia Enterprise](https://enterprise.wikimedia.com/docs/snapshot/).
+
+ The dataset has been carefully filtered and deduplicated. It retains only the most relevant textual content such as article summaries, short descriptions, main image URLs, infoboxes, and cleaned section texts. Non-textual or noisy elements (like references, citations, and markdown artifacts) have been removed to provide a cleaner signal for NLP model training.
+
+ To encourage reusability and transparency, we also provide a version containing the articles **excluded** during filtering (config: `fr_removed`). This enables users to reapply their own filtering strategies. The full dataset is the union of the filtered and removed sets.
+
+ ## Data Structure
+
+ * **Language**: French (`fr`)
+ * **Fields**:
+
+ * `text`: Article content.
+ * `id`: ID of the article.
+ * `url`: URL of the article.
+ * `date`: Date of the article.
+ * `file_path`: Reference to the original file in the wiki namespace.
+ * `description`: One-sentence description of the article for quick reference.
+
+ ## Source and Processing
+
+ The original data is sourced from the [Wikipedia Structured Contents (Kaggle)](https://www.kaggle.com/datasets/wikimedia-foundation/wikipedia-structured-contents) dataset. It was extracted from HTML snapshots provided by Wikimedia Enterprise, then parsed and cleaned to retain only the most useful and structured textual elements for machine learning.
+
+ The dataset has been carefully filtered and deduplicated. The filtering follows the same rules as those applied with [FineWeb2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2) using the [Datatrove](https://github.com/huggingface/datatrove) library.
+
+ This preprocessing step aims to improve readability, consistency, and structure, helping language models learn more effectively.
+
+ ## Data Splits
+
+ Currently, the dataset is provided as a single `train` split. No predefined validation or test sets are included. Users are encouraged to create their own splits as needed.
+
+ ## How to Use
+
+ You can load FineWiki using the 🤗 `datasets` library like this:
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("LeMoussel/finewiki", "fr", split="train")
+
+ # Example: print the first article
+ print(dataset[0])
+ ```
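
The two configs declared in the card's YAML front matter (`fr` and `fr_removed`) can also be loaded separately and recombined into the full pre-filtering set described in the Dataset Overview. A minimal sketch with the 🤗 `datasets` library, assuming both configs share the same schema:

```python
from datasets import load_dataset, concatenate_datasets

# Articles kept after filtering, and articles removed by the filters
kept = load_dataset("LeMoussel/finewiki", "fr", split="train")
removed = load_dataset("LeMoussel/finewiki", "fr_removed", split="train")

# Per the card: the full dataset is the union of the filtered and removed sets
full_data = concatenate_datasets([kept, removed])
print(len(kept), len(removed), len(full_data))
```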
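To inspect the fields listed under Data Structure without downloading the whole Parquet file, streaming mode works as well. The field names below come from the card; their exact types are not specified there:

```python
from datasets import load_dataset

# Stream instead of downloading the full Parquet shard up front
ds = load_dataset("LeMoussel/finewiki", "fr", split="train", streaming=True)

first = next(iter(ds))
# Fields listed on the card: text, id, url, date, file_path, description
for key in ("id", "url", "date", "file_path", "description"):
    print(key, "->", first.get(key))
print(first["text"][:200])  # beginning of the article body
```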
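The Source and Processing section states that the filtering follows the FineWeb2 rules via Datatrove, but the exact pipeline is not published in the card. The sketch below only illustrates what a Datatrove filtering pass of that kind can look like; the reader, filter choices, thresholds (left at their defaults), and paths are assumptions, not the authors' configuration:

```python
from datatrove.executor.local import LocalPipelineExecutor
from datatrove.pipeline.readers import ParquetReader
from datatrove.pipeline.filters import (
    C4QualityFilter,
    GopherQualityFilter,
    GopherRepetitionFilter,
)
from datatrove.pipeline.writers.jsonl import JsonlWriter

# Hypothetical local layout: raw Parquet shards under data/raw/, outputs under output/
pipeline = [
    ParquetReader("data/raw/"),
    GopherRepetitionFilter(exclusion_writer=JsonlWriter("output/removed/repetition/")),
    GopherQualityFilter(exclusion_writer=JsonlWriter("output/removed/gopher/")),
    C4QualityFilter(exclusion_writer=JsonlWriter("output/removed/c4/")),
    JsonlWriter("output/kept/"),  # documents that survive all filters
]

LocalPipelineExecutor(pipeline=pipeline, tasks=1).run()
```

Keeping the rejected documents through `exclusion_writer` mirrors how a separate `fr_removed`-style set can be produced alongside the kept one.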
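Since the card ships only a `train` split, a held-out evaluation set has to be carved out by the user; a minimal sketch with `Dataset.train_test_split` (the 1% ratio and seed are arbitrary choices):

```python
from datasets import load_dataset

dataset = load_dataset("LeMoussel/finewiki", "fr", split="train")

# Hold out 1% of the articles for evaluation
splits = dataset.train_test_split(test_size=0.01, seed=42)
train_set, eval_set = splits["train"], splits["test"]
print(len(train_set), len(eval_set))
```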