---
pretty_name: French Science Commons
size_categories:
- 1M<n<10M
---

# **French Science Commons**

**French Science Commons** (*Commun numérique des sciences en français*) brings together French-origin scientific publications under permissive licenses, covering a twenty-year span from 2007 to 2026. It comprises **1 248 860 scientific documents** (1 189 628 articles and 59 232 theses) indexed across multiple open-access academic sources, including HAL, OpenAlex, scientific journals, and institutional repositories.

The corpus is designed with versatility in mind, making it suitable for a variety of downstream applications within the research community and beyond, including developing domain-specific language models and exploring thematic patterns through visualizations.

The project is part of a broader initiative to support the discoverability of French-language science in a context of scientific overproduction dominated by English. It aims to establish shared digital commons within the Francophonie, grounded in principles of linguistic and cultural sovereignty, traceability, transparency, and scientific integrity.

## **Dataset overview**

| Property | Value |
| ----- | ----- |
| **Total words** | 15 694 707 465 |
| **Date range** | Documents made available on the repositories from 2007 to 2026 (some already published earlier) |
| **Sources** | HAL, OpenAlex, others (specified per entry in the dataset) |
| **File format** | Parquet |
| **Licenses** | Specified per entry in the dataset; see the Licenses section for details |

## **Motivation**

French-language science is systematically underrepresented in large language model training corpora, which are dominated by English-language content. This corpus addresses that gap by providing a high-quality, openly licensed, and richly structured resource for:

* Training retrieval-augmented systems and specialised language models
* Thematic exploration through interactive semantic visualisation
* Classification and indexing of scientific content
* Scientific writing assistance, translation, and popularisation
* Education and training in French-language scientific domains

The multidisciplinary scope of the corpus, with balanced coverage across all major disciplinary fields, is a deliberate design choice to maximise adoption. We kept the “discipline” classification found in the original repositories and also grouped those disciplines into six supra-categories (“supra”), based on the OECD Frascati research-areas classification. The six supra-categories are:


1. Natural sciences
2. Engineering and technology
3. Medical and health sciences
4. Agricultural and veterinary sciences
5. Social sciences
6. Humanities and arts

## **Dataset Structure**

The data is structured at **page level**: each row represents one page of a source document, with rich metadata at the document level (id, author, title, DOI, discipline, license) repeated across rows. Full documents can be reconstructed by grouping on `id` and ordering by `page`.
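As a sketch, assuming the Parquet shards are loaded with pandas, documents can be rebuilt from the page-level rows like this (the toy rows below are illustrative, but the column names follow the schema documented here):

```python
import pandas as pd

# Toy page-level rows mimicking the dataset schema (values are illustrative).
pages = pd.DataFrame({
    "id":   ["W123", "W123", "W456"],
    "page": [2, 1, 1],
    "text": ["second page", "first page", "only page"],
})

# Reconstruct full documents: group on `id`, order by `page`, join the text.
docs = (
    pages.sort_values(["id", "page"])
         .groupby("id")["text"]
         .apply("\n\n".join)
)
```

The same grouping applies unchanged to the real columns once the Parquet files are read.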

### **Data Fields**

| Field | Description |
| ----- | ----- |
| `id` | Unique document identifier (OpenAlex ID or equivalent) |
| `title` | Title of the document |
| `author` | Author(s) of the document |
| `publication_date` | Publication date |
| `doi` | Digital Object Identifier |
| `language` | Language of the document (e.g. `fr`, `en`) |
| `license` | License of the source document (e.g. `cc-by`, `cc0`) |
| `terms` | Additional usage terms, if any |
| `discipline` | Original classification found in the repository (e.g. `Sciences humaines et sociales`, `Sciences cognitives`) |
| `supra` | Classification based on the OECD research-areas supra-categories |
| `source` | Repository or journal of origin |
| `page` | Page number within the source PDF |
| `text` | Extracted text content of the page, in Markdown format |

## **Processing Pipeline**

A key challenge in building this corpus was converting scientific PDFs into structured text. Naive OCR approaches were insufficient given the complexity of academic layouts (multi-column text, mathematical formulas, tables, figures).

The pipeline proceeds as follows:

1. **PDF rendering** — each page of every source PDF is rendered as a raster image.
2. **Vision-based OCR** — pages are processed using [dots.ocr](https://huggingface.co/rednote-hilab/dots.ocr), an open-source vision-language model that produces output directly in **Markdown format**.
3. **Structure preservation** — the model preserves document structure, including headings, subheadings, lists, tables, and mathematical formulas.
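A minimal sketch of that loop, with the rendering and OCR steps stubbed out (`render_pages` and `ocr_page` are hypothetical placeholders; in the real pipeline they would call a PDF rasteriser and the dots.ocr model respectively):

```python
def render_pages(pdf_path):
    # Assumption: in the real pipeline this rasterises every PDF page with a
    # rendering library; here we return fake page images for illustration.
    return [f"image-of-page-{i}" for i in range(1, 3)]

def ocr_page(image):
    # Assumption: stands in for a dots.ocr inference call, which returns
    # Markdown preserving headings, lists, tables and formulas.
    return f"# Heading\n\nText recognised from {image}."

def pdf_to_rows(doc_id, pdf_path):
    # One output row per page, matching the dataset's page-level layout.
    return [
        {"id": doc_id, "page": i, "text": ocr_page(img)}
        for i, img in enumerate(render_pages(pdf_path), start=1)
    ]

rows = pdf_to_rows("W123", "paper.pdf")
```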

This approach produces higher-fidelity training data compared to pipelines that extract plain, unformatted text, as it retains both semantic and syntactic relationships present in the original documents.

## **Distribution by Category**

## **Licenses**

The corpus is divided into two main collections: “French Open Science” and “HAL Open Access”.

**French Open Science**

This collection includes most French scientific publications released under free licenses that allow reuse (mostly CC-BY, CC-BY-SA, CC0, and the French *Licence ouverte*). Each document keeps its own license, with provenance metadata ensuring full attribution. Along with HAL, we used OpenAlex to track exclusively free-licensed publications from a variety of sources, including EDP Sciences and Érudit.

**HAL Open Access**

This dataset has been extracted from HAL's open archive, which distributes scientific publications following open-access principles. It follows on from the HALvest project by ALMAnaCH and is made available under the same conditions: the corpus comprises both Creative-Commons-licensed and copyrighted documents (whose distribution on HAL has been authorized by the publisher). This must be considered before using this dataset for any purpose other than training deep learning models, data mining, and the like. We do not own any of the text from which this data has been extracted. The “terms” column provides a summary of these conditions for each document.
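When downstream reuse requires an explicit license, rows can be filtered on the `license` column before use; a minimal sketch with pandas (toy rows with illustrative license values):

```python
import pandas as pd

# Toy rows mimicking the dataset's `license` column (values are illustrative).
rows = pd.DataFrame({
    "id":      ["W1", "W2", "W3"],
    "license": ["cc-by", None, "cc0"],
    "text":    ["...", "...", "..."],
})

# Keep only documents with an explicit Creative Commons license.
cc = rows[rows["license"].isin(["cc-by", "cc-by-sa", "cc0"])]

print(sorted(cc["id"]))  # prints ['W1', 'W3']
```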

## **Acknowledgements**

We thank our partners for their invaluable support and contributions:

* French Ministry of Culture
* OPERAS — European research infrastructure partner
* Chaire de recherche du Québec sur la découvrabilité des contenus scientifiques en français — research partner
* Délégation générale à la langue française et aux langues de France (DGLFLF) — funding support

All documents have been processed using [dots.ocr](https://huggingface.co/rednote-hilab/dots.ocr), the open-weights VLM OCR model developed by Rednote-Hilab.

## **Citation**

```
@dataset{french_science_commons,
  title  = {French Science Commons},
  author = {?},
  year   = {2026},
  url    = {?}
}
```