---
pretty_name: French Science Commons
size_categories:
  - 1M<n<10M
---

# French Science Commons

French Science Commons (Commun numérique des sciences en français) brings together French-origin scientific publications under permissive licenses, covering a twenty-year span from 2007 to 2026. It comprises 1 248 860 scientific documents (1 189 628 articles and 59 232 theses) indexed across multiple open-access academic repositories, such as HAL, OpenAlex, scientific journals, and institutional repositories.

The corpus is designed with versatility in mind, making it suitable for a variety of downstream applications within the research community and beyond, including developing domain-specific language models and exploring thematic patterns through visualizations.

The project is part of a broader initiative to support the discoverability of French-language science in a context of scientific overproduction dominated by English. It aims to establish shared digital commons within the Francophonie, grounded in principles of linguistic and cultural sovereignty, traceability, transparency, and scientific integrity.

## Dataset overview

| Property | Value |
| --- | --- |
| Total words | 15 694 707 465 |
| Date range | Documents made available on the repositories from 2007 to 2026 (some published before this range) |
| Sources | HAL, OpenAlex, others (specified per entry in the dataset) |
| File format | Parquet |
| Licenses | Specified per entry in the dataset; see the Licenses section for details |

## Motivation

French-language science is systematically underrepresented in large language model training corpora, which are dominated by English-language content. This corpus addresses that gap by providing a high-quality, openly licensed, and richly structured resource for:

  • Training retrieval-augmented systems and specialised language models
  • Thematic exploration through interactive semantic visualisation
  • Classification and indexing of scientific content
  • Scientific writing assistance, translation, and popularisation
  • Education and training in French-language scientific domains

The corpus's multidisciplinary scope, with balanced coverage across all major disciplinary fields, is a deliberate design choice to maximise adoption. We kept the “discipline” classification found in the original repositories and also grouped those disciplines into six supra-categories (“supra”), based on the OECD Frascati research-areas classification. The six supra-categories are:

  1. Natural sciences
  2. Engineering and technology
  3. Medical and health sciences
  4. Agricultural and veterinary sciences
  5. Social sciences
  6. Humanities and arts
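Because the `supra` column is derived from `discipline`, consumers can reproduce or extend the grouping with a simple lookup. A minimal sketch, where the mapping entries are illustrative examples only (the project's actual discipline-to-supra table ships inside the dataset itself):

```python
# Hypothetical discipline-to-supra mapping; entries are illustrative,
# not the project's official classification table.
DISCIPLINE_TO_SUPRA = {
    "Mathématiques": "Natural sciences",
    "Informatique": "Natural sciences",
    "Sciences cognitives": "Social sciences",
    "Médecine": "Medical and health sciences",
    "Histoire": "Humanities and arts",
}

def supra_for(discipline: str) -> str:
    """Look up the supra-category, with a sentinel for unmapped disciplines."""
    return DISCIPLINE_TO_SUPRA.get(discipline, "Unknown")
```

In practice the dataset already carries both columns per row, so this lookup is only needed when classifying new, external documents against the same scheme.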

## Dataset Structure

The data is structured at page level: each row represents one page of a source document, with document-level metadata (`id`, `author`, `title`, `doi`, `discipline`, `license`) repeated across rows. Full documents can be reconstructed by grouping on `id` and ordering by `page`.
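The group-and-order reconstruction can be sketched with pandas on a few toy rows (real data would be read from the Parquet shards; the column names match the schema below, but the values here are synthetic):

```python
import pandas as pd

# Toy page-level rows mimicking the dataset layout; in practice these
# would come from the Parquet files.
pages = pd.DataFrame(
    {
        "id": ["W1", "W1", "W2"],
        "page": [2, 1, 1],
        "text": ["second page", "first page", "only page"],
    }
)

# Reconstruct full documents: sort by (id, page), group on id, join the text.
docs = (
    pages.sort_values(["id", "page"])
    .groupby("id")["text"]
    .apply("\n\n".join)
    .to_dict()
)
```

Joining on a blank line keeps page boundaries visible in the reassembled Markdown; any other separator works equally well if page breaks do not matter downstream.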

### Data Fields

| Field | Description |
| --- | --- |
| `id` | Unique document identifier (OpenAlex ID or equivalent) |
| `title` | Title of the document |
| `author` | Author(s) of the document |
| `publication_date` | Publication date |
| `doi` | Digital Object Identifier |
| `language` | Language of the document (e.g. `fr`, `en`) |
| `license` | License of the source document (e.g. `cc-by`, `cc0`) |
| `terms` | Additional usage terms, if any |
| `discipline` | Original classification found in the repository (e.g. Sciences humaines et sociales, Sciences cognitives) |
| `supra` | Classification based on the OECD research-areas supra-categories |
| `source` | Repository or journal of origin |
| `page` | Page number within the source PDF |
| `text` | Extracted text content of the page, in Markdown format |
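The per-row `language` and `license` fields make it straightforward to carve out subsets before training. A minimal sketch on synthetic rows (the license strings follow the lowercase style shown above, but the full set of values present in the data may differ):

```python
# Synthetic rows standing in for dataset records; real rows would be
# read from the Parquet shards.
rows = [
    {"id": "W1", "language": "fr", "license": "cc-by"},
    {"id": "W2", "language": "en", "license": "cc-by-sa"},
    {"id": "W3", "language": "fr", "license": "copyrighted"},
]

# Keep French-language rows carrying a Creative Commons license.
PERMISSIVE = {"cc-by", "cc-by-sa", "cc0"}
subset = [r for r in rows if r["language"] == "fr" and r["license"] in PERMISSIVE]
ids = [r["id"] for r in subset]
```

For copyrighted HAL documents, the `terms` field should also be inspected, as described in the Licenses section.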

## Processing Pipeline

A key challenge in building this corpus was converting scientific PDFs into structured text. Naive OCR approaches were insufficient given the complexity of academic layouts (multi-column text, mathematical formulas, tables, figures).

The pipeline proceeds as follows:

  1. PDF rendering — each page of every source PDF is rendered as a raster image.
  2. Vision-based OCR — pages are processed using dots.ocr, an open-source vision-language model that produces output directly in Markdown format.
  3. Structure preservation — the model preserves document structure, including headings, subheadings, lists, tables, and mathematical formulas.

This approach produces higher-fidelity training data compared to pipelines that extract plain, unformatted text, as it retains both semantic and syntactic relationships present in the original documents.
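Because step 2 emits Markdown rather than plain text, the structural fidelity claimed in step 3 can be spot-checked on any page's `text` field. A minimal sketch (the counting heuristic is ours, not part of the pipeline):

```python
import re

def structure_profile(markdown_page: str) -> dict:
    """Count coarse structural markers in a page of extracted Markdown."""
    return {
        "headings": len(re.findall(r"(?m)^#{1,6} ", markdown_page)),
        "table_rows": len(re.findall(r"(?m)^\|.*\|$", markdown_page)),
        "list_items": len(re.findall(r"(?m)^[-*] ", markdown_page)),
    }

# A tiny synthetic page; a real page would come from the `text` column.
page = "# Résultats\n\n| x | y |\n| 1 | 2 |\n\n- item one\n- item two\n"
profile = structure_profile(page)
```

Pages extracted as plain, unformatted text would score zero on all three counters, which is exactly the signal such a check is meant to surface.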

## Distribution by Category

*(Figures: corpus distribution; corpus constitution by supra-category.)*

## Licenses

The corpus is divided into two main collections: “French Open Science” and “HAL Open Access”.

### French Open Science

This collection includes most French scientific publications under free licenses that allow reuse (chiefly CC-BY, CC-BY-SA, CC0, and the French Licence ouverte). Each document keeps its own license, with provenance metadata ensuring full attribution. Alongside HAL, we used OpenAlex to track exclusively free-licensed publications from a variety of sources, including EDP Sciences and Érudit.

### HAL Open Access

This collection has been extracted from HAL's open archive, which distributes scientific publications following open-access principles. It follows on from the HALvest project from Almanach and is made available under the same conditions: the corpus comprises both Creative Commons-licensed documents and copyrighted ones (whose distribution on HAL was authorized by the publisher). This must be considered before using this dataset for any purpose other than training deep-learning models, data mining, and similar uses. We do not own any of the text from which this data has been extracted. The `terms` column provides a summary of these conditions for each document.

## Acknowledgements

We thank our partners for their invaluable support and contributions:

  • French Ministry of Culture
  • OPERAS — European research infrastructure partner
  • Chaire de recherche du Québec sur la découvrabilité des contenus scientifiques en français — research partner
  • Délégation générale à la langue française et aux langues de France (DGLFLF) — funding support

All documents have been processed with [dots.ocr](https://huggingface.co/rednote-hilab/dots.ocr), an open-weights VLM OCR model developed by Rednote-Hilab.

## Citation

```bibtex
@dataset{french_science_commons,
  title  = {French Science Commons},
  author = {?},
  year   = {2026},
  url    = {?}
}
```