---
language:
  - fr
license: cc-by-sa-4.0
size_categories:
  - 100K<n<1M
task_categories:
  - text-generation
task_ids:
  - language-modeling
  - masked-language-modeling
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train/*.parquet
dataset_info:
  - config_name: default
    features:
      - name: title
        dtype: string
      - name: authors
        dtype: string
      - name: identifier
        dtype: int64
      - name: date_created
        dtype: string
      - name: wiki_url
        dtype: string
      - name: text
        dtype: string
      - name: quality_signals
        dtype: string
      - name: version_id
        dtype: int64
    splits:
      - name: train
        num_bytes: 7974052431
        num_examples: 472517
    download_size: 4584532829
    dataset_size: 7974052431
---

# Wikisource FR Dataset

This dataset provides a cleaned plain-text version of French open texts from fr.wikisource.org. The content is distributed without HTML tags or MediaWiki templates, retaining only minimal Markdown syntax (headers, lists, tables) to facilitate downstream NLP and LLM usage.

The dataset is built using the Wikimedia Enterprise Snapshot APIs, which allow retrieving complete Wikimedia projects as database dump files.

## Dataset Structure

### Data Instances

Each record corresponds to a single Wikisource document or work.

Example:

```python
{
  'title': 'De la Chasse (Trad. Talbot)/06',
  'authors': 'Xénophon',
  'identifier': 1608939,
  'date_created': '2013-11-03T08:40:48Z',
  'wiki_url': 'https://fr.wikisource.org/wiki/De_la_Chasse_(Trad._Talbot)/06',
  'text': '### CHAPITRE VI.\nDe l’armure des chiens, du temps propre à la quête, du garde-filet, ...',
  'quality_signals': '{"ccnet_perplexity": 213.77, "num_tokens": 2455, "doc_length": 9363}',
}
```

### Data Fields

The data fields are consistent across all configurations:

  • title (str): Title of the text.
  • authors (str): Authors of the text.
  • identifier (int64): ID of the text.
  • date_created (str): Creation date of the text.
  • wiki_url (str): URL of the text on Wikisource.
  • text (str): Content of the text.
  • quality_signals (str): Quality signals of the text, serialized as a JSON string.
  • version_id (int64): Version ID of the text.

## Example Usage (Python)

Load the full dataset:

```python
import datasets

ds = datasets.load_dataset("LeMoussel/wikisource_fr", split="train")
```

## Intended Use

This dataset is suitable for pretraining French LLMs and is intended for:

  • training and pretraining French language models (LLMs, MLMs),

  • evaluating language models on literary and historical French texts,

  • NLP research tasks such as text generation, summarization, or segmentation.

It does not contain personal data and is exclusively composed of freely licensed texts from Wikisource.

## Dataset Statistics

The following statistics are provided to facilitate LLM pretraining planning. Exact values may vary slightly depending on the tokenizer and preprocessing strategy.

  • Total size of the dataset (in memory): ~7.43 GB

  • Total size of the dataset (on disk): ~4.27 GB

  • Total number of documents: 472 517 texts

  • Total number of characters: 7 422 890 508

  • Average document length: ~15 709 characters
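The average document length follows directly from the two totals above, as a quick sanity check:

```python
# Totals reported in the statistics above for the train split.
total_chars = 7_422_890_508
num_docs = 472_517

# Integer average length in characters per document.
avg_len = total_chars // num_docs  # ≈ 15 709 characters
```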

### Token Statistics

Estimated using sentencepiece tokenizers commonly employed for French LLMs:

  • Estimated total tokens: ~1.9B tokens
  • Average number of tokens per document: ~4 120 tokens

## Quality Signal: CCNet Perplexity

CCNet Perplexity is a linguistic quality indicator used to measure how close a given text is to a high-quality reference corpus, typically Wikipedia. It is commonly employed in large-scale dataset filtering pipelines for language model training, in order to identify noisy, malformed, or out-of-domain content.

In this dataset, the score is computed using the open-source implementation provided by OpenLLM-France: CCNet Perplexity Library

The value is exposed in the quality_signals field under the key ccnet_perplexity.
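Because quality_signals is stored as a JSON string rather than a nested structure, it must be parsed before the score can be read. A minimal sketch, using the values from the example record shown earlier:

```python
import json

# quality_signals as stored in each record: a JSON-encoded string.
raw = '{"ccnet_perplexity": 213.77, "num_tokens": 2455, "doc_length": 9363}'

signals = json.loads(raw)
perplexity = signals["ccnet_perplexity"]
```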

### Score Interpretation

  • Low perplexity (~100–300): the text is linguistically close to Wikipedia:

    • well-formed syntax
    • standard vocabulary
    • coherent structure
    • low noise level
  • High perplexity (>1000): the text diverges significantly from the reference corpus:

    • poorly formatted content
    • potential noise or spam
    • OCR artifacts
    • or highly specialized vocabulary uncommon in Wikipedia

Lower perplexity indicates that the text is more likely under the Wikipedia-trained language model, and therefore closer to the reference domain.

⚠️ A high perplexity score does not necessarily imply low semantic value, but rather a linguistic distance from the Wikipedia domain.

### Observations for the Wikisource FR Dataset

CCNet Perplexity Histogram

  • Median CCNet Perplexity: 298.84

This indicates that the majority of Wikisource documents exhibit a linguistic quality comparable to Wikipedia, which is consistent with the curated and editorial nature of the source.

  • Extreme values (up to ~183 437): these outliers most likely correspond to:

    • documents with highly specialized or archaic vocabulary,
    • residual formatting issues,
    • atypical content structures (tables, lists, annotations),
    • or extraction artifacts.

This signal can be leveraged to:

  • filter documents based on quality thresholds,
  • weight samples during training,
  • or analyze quality distributions within the corpus.
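As an illustration of the first use case, the sketch below filters records on a perplexity threshold. The below_perplexity helper and the threshold of 1000 are arbitrary choices for this example, not part of the dataset; with the datasets library, the same predicate can be passed to ds.filter(below_perplexity).

```python
import json

def below_perplexity(example, max_ppl=1000.0):
    # quality_signals is a JSON string, so parse it before reading the score.
    signals = json.loads(example["quality_signals"])
    return signals["ccnet_perplexity"] < max_ppl

# Toy records (perplexity values are made up for the illustration).
records = [
    {"quality_signals": '{"ccnet_perplexity": 213.77}'},
    {"quality_signals": '{"ccnet_perplexity": 5321.4}'},
]
kept = [r for r in records if below_perplexity(r)]  # keeps only the first record
```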

## License

The texts originate from Wikisource and are governed by the licenses defined by the Wikimedia Foundation. Some texts may be available only under the CC BY-SA license, while others may belong to the public domain. Please refer to the Wikimedia Terms of Use for details.

## Acknowledgements

Many thanks to the Wikimedia Foundation for providing open access to the data and maintaining a high-quality open knowledge ecosystem.

## Citation

```bibtex
@online{wikisource_fr_dump,
    author = "LeMoussel Labs",
    title  = "French plain text of Wikisource",
    url    = "https://huggingface.co/datasets/LeMoussel/wikisource_fr"
}
```