---
license: cc-by-4.0
task_categories:
  - text-generation
language:
  - fr
pretty_name: πŸ“š FineWiki
size_categories:
  - 100K<n<1M
configs:
  - config_name: fr_removed
    data_files:
      - split: train
        path: data/fr_removed.parquet
  - config_name: fr
    data_files:
      - split: train
        path: data/fr.parquet
---
# πŸ“š FineWiki

*A high-quality text pretraining dataset derived from the French edition of Wikipedia.*

## Dataset Overview

FineWiki is a high-quality French-language dataset designed for pretraining and NLP tasks. It is derived from the French edition of Wikipedia using the Wikipedia Structured Contents dataset released by the Wikimedia Foundation on Kaggle.

Each entry is a structured JSON line representing a full Wikipedia article, parsed and cleaned from HTML snapshots provided by Wikimedia Enterprise.

The dataset has been carefully filtered and deduplicated. It retains only the most relevant textual content such as article summaries, short descriptions, main image URLs, infoboxes, and cleaned section texts. Non-textual or noisy elements (like references, citations, and markdown artifacts) have been removed to provide a cleaner signal for NLP model training.

To encourage reusability and transparency, we also provide the articles excluded during filtering as a separate config (`fr_removed`). This lets users reapply their own filtering strategies: the `fr` and `fr_removed` configs together reconstruct the full, unfiltered dataset.

## Data Structure

- **Language:** French (`fr`)
- **Fields:**
  - `text`: Article content.
  - `id`: ID of the article.
  - `url`: URL of the article.
  - `date`: Date of the article.
  - `file_path`: Reference to the original file in the wiki namespace.
  - `description`: One-sentence description of the article for quick reference.
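As a quick sanity check, each record can be validated against the schema above. A minimal sketch (the field names come from the list above; the sample values are placeholders, not real data):

```python
# Expected per-record fields, taken from the schema listed above.
EXPECTED_FIELDS = {"text", "id", "url", "date", "file_path", "description"}

def missing_fields(record):
    """Return the set of expected fields absent from a record dict."""
    return EXPECTED_FIELDS - set(record)

# Placeholder record for illustration only.
row = {"text": "...", "id": "123", "url": "https://fr.wikipedia.org/...",
       "date": "2024-01-01", "file_path": "...", "description": "..."}
print(missing_fields(row))  # set()
```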

## Source and Processing

The original data is sourced from the Wikipedia Structured Contents (Kaggle) dataset. It was extracted from HTML snapshots provided by Wikimedia Enterprise, then parsed and cleaned to retain only the most useful and structured textual elements for machine learning.

The dataset has been carefully filtered and deduplicated. The filtering follows the same rules as FineWeb2, implemented with the Datatrove library.

This preprocessing step aims to improve readability, consistency, and structure, helping language models learn more effectively.
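The actual pipeline relies on FineWeb2's Datatrove-based filters; purely to illustrate the kind of deduplication involved, here is a minimal exact-duplicate sketch in plain Python (not the real implementation):

```python
import hashlib

def dedup_exact(texts):
    """Drop exact duplicates by hashing whitespace-normalized, lowercased text.

    Illustrative only: the real pipeline uses Datatrove's FineWeb2 filters,
    which also cover fuzzy deduplication and quality heuristics.
    """
    seen, kept = set(), []
    for t in texts:
        key = " ".join(t.split()).lower()
        h = hashlib.sha1(key.encode("utf-8")).hexdigest()
        if h not in seen:
            seen.add(h)
            kept.append(t)
    return kept

docs = ["Bonjour le monde.", "Bonjour  le monde.", "Autre article."]
print(dedup_exact(docs))  # ['Bonjour le monde.', 'Autre article.']
```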

## Data Splits

Currently, the dataset is provided as a single train split. No predefined validation or test sets are included. Users are encouraged to create their own splits as needed.
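Since only a `train` split ships, you can carve out your own validation set. A minimal index-shuffling sketch (the 1% fraction and the seed are arbitrary choices for illustration):

```python
import random

def make_splits(n_rows, val_frac=0.01, seed=42):
    """Deterministically shuffle row indices and split off a validation set."""
    idx = list(range(n_rows))
    random.Random(seed).shuffle(idx)
    n_val = round(n_rows * val_frac)
    cut = n_rows - n_val
    return idx[:cut], idx[cut:]

train_idx, val_idx = make_splits(100_000)
print(len(train_idx), len(val_idx))  # 99000 1000
```

With the πŸ€— `datasets` library, `dataset.train_test_split(test_size=0.01, seed=42)` achieves the same thing directly on the loaded dataset.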

## How to Use

You can load FineWiki with the πŸ€— `datasets` library like this:

```python
from datasets import load_dataset

dataset = load_dataset("LeMoussel/finewiki", split="train")

# The articles excluded during filtering are available as a separate config:
# load_dataset("LeMoussel/finewiki", "fr_removed", split="train")

# Example: print the first article
print(dataset[0])
```