|
|
--- |
|
|
language: |
|
|
- fr |
|
|
tags: |
|
|
- france |
|
|
- service-public |
|
|
- demarches |
|
|
- embeddings |
|
|
- administration |
|
|
- open-data |
|
|
- government |
|
|
pretty_name: Service-Public.fr practical sheets dataset |
|
|
size_categories: |
|
|
- 10K<n<100K |
|
|
license: etalab-2.0 |
|
|
configs: |
|
|
- config_name: latest |
|
|
data_files: "data/service-public-latest/*.parquet" |
|
|
default: true |
|
|
--- |
|
|
--------------------------------------------------------------------------------------------------- |
|
|
### 📢 2026 Survey: Usage of MediaTech's public datasets


Do you use this dataset or other datasets from our [MediaTech](https://huggingface.co/collections/AgentPublic/mediatech) collection? Your feedback matters!


Help us improve our public datasets by answering this quick survey (5 min): 👉 https://grist.numerique.gouv.fr/o/albert/forms/gF4hLaq9VvUog6c5aVDuMw/11


Thank you for your contribution! 🙌
|
|
|
|
|
--------------------------------------------------------------------------------------------------- |
|
|
# 🇫🇷 Service-Public.fr practical sheets dataset (Administrative Procedures) |
|
|
|
|
|
This dataset is derived from the official [Service-Public.fr](https://www.service-public.fr/) platform and contains practical information sheets and resources targeting both individuals (Particuliers) and entrepreneurs (Entreprendre). |
|
|
These sheets provide information on administrative procedures across a wide range of themes.
|
|
The data is publicly available on [data.gouv.fr](https://www.data.gouv.fr) and has been processed and chunked for optimized semantic retrieval and large-scale embedding use. |
|
|
|
|
|
The dataset provides semantic-ready, structured, and chunked data of official content on French administrative procedures.
|
|
These chunks have been vectorized using the [`BAAI/bge-m3`](https://huggingface.co/BAAI/bge-m3) embedding model to enable semantic search and retrieval tasks. |
|
|
|
|
|
|
|
|
|
|
|
Each record represents a semantically coherent text fragment (chunk) from an original sheet, enriched with metadata and a precomputed embedding vector suitable for search and retrieval applications (e.g., RAG pipelines). |
|
|
|
|
|
--- |
|
|
|
|
|
## 🗂️ Dataset Contents |
|
|
|
|
|
The dataset is provided in **Parquet format** and includes the following columns: |
|
|
|
|
|
| Column Name | Type | Description | |
|
|
|--------------------|------------------|-----------------------------------------------------------------------------| |
|
|
| `chunk_id` | `str` | Unique generated and encoded hash of each chunk. | |
|
|
| `doc_id` | `str` | Document identifier from the source site. | |
|
|
| `chunk_index`      | `int`            | Index of the chunk within its original document (starting at 1).             |
|
|
| `chunk_xxh64` | `str` | XXH64 hash of the `chunk_text` value. | |
|
|
| `audience` | `str` | Target audience: `Particuliers` and/or `Professionnels`. | |
|
|
| `theme` | `str` | Thematic categories (e.g., `Famille - Scolarité, Travail - Formation`). | |
|
|
| `title` | `str` | Title of the article. | |
|
|
| `surtitre`         | `str`            | Higher-level theme in the article structure.                                  |
|
|
| `source`           | `str`            | Dataset source label (always `"service-public"` in this dataset).            |
|
|
| `introduction` | `str` | Introductory paragraph of the article. | |
|
|
| `url` | `str` | URL of the original article. | |
|
|
| `related_questions`| `list[dict]` | List of related questions, including their sid and URLs. | |
|
|
| `web_services` | `list[dict]` | Associated web services (if any). | |
|
|
| `context` | `list[str]` | Section names related to the chunk. | |
|
|
| `text` | `str` | Textual content extracted and chunked from a section of the article. | |
|
|
| `chunk_text`       | `str`            | Formatted text combining the `title`, `context`, `introduction` and `text` values; used for embedding. |
|
|
| `embeddings_bge-m3`| `str`            | Embedding vector of `chunk_text` computed with `BAAI/bge-m3` (1024 dimensions), stored as a JSON array string. |
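
To make the schema concrete, here is a minimal sketch that inspects one record (assuming the default `latest` config and its `train` split):

```python
# Minimal sketch: peek at one record to see the schema described above.
from datasets import load_dataset

ds = load_dataset("AgentPublic/service-public", split="train")
record = ds[0]
print(record["title"], record["url"])
print(record["chunk_index"], record["theme"])
```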
|
|
|
|
|
--- |
|
|
## 🛠️ Data Processing Methodology |
|
|
|
|
|
### 1. 📥 Field Extraction |
|
|
|
|
|
The following fields were extracted and/or transformed from the original XML files: |
|
|
|
|
|
- **Basic fields**: `doc_id`, `theme`, `title`, `surtitre`, `introduction`, `url`, `related_questions`, and `web_services` are extracted directly from the XML files, with additional processing where needed.
|
|
- **Generated fields**: |
|
|
  - `chunk_id`: a unique generated and encoded hash for each chunk.
|
|
  - `chunk_index`: the index of the chunk within its document. Each document has a unique `doc_id`.
|
|
  - `chunk_xxh64`: the XXH64 hash of the `chunk_text` value, useful for detecting whether a chunk has changed from one version to the next (see the sketch after this list).
|
|
  - `source`: always `"service-public"` here.
|
|
- **Textual fields**: |
|
|
- `context`: Optional contextual hierarchy (e.g., nested sections). |
|
|
  - `text`: Textual content of the article chunk.


    This is the value corresponding to a semantically coherent fragment of textual content extracted from the XML document structure for a given `sid`.
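
As a quick illustration, here is a minimal sketch of the kind of version check `chunk_xxh64` enables. It assumes the hashes are `xxhash` 64-bit hex digests of the UTF-8 encoded `chunk_text`; the pipeline's exact hashing setup may differ.

```python
# Minimal sketch (assumption: chunk_xxh64 is an xxhash 64-bit hex digest
# of the UTF-8 encoded chunk_text; the pipeline's exact setup may differ).
import xxhash

def chunk_has_changed(chunk_text: str, stored_hash: str) -> bool:
    """Return True if chunk_text no longer matches its stored chunk_xxh64."""
    return xxhash.xxh64(chunk_text.encode("utf-8")).hexdigest() != stored_hash
```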
|
|
|
|
|
|
|
|
The `source` column is a fixed value here because this dataset was built at the same time as the [Travail Emploi dataset](https://huggingface.co/datasets/AgentPublic/travail-emploi).


Both datasets were intended to be grouped in a single vector collection, so they carry different `source` values to keep them distinguishable.
|
|
|
|
|
### 2. ✂️ Generation of `chunk_text`
|
|
|
|
|
The `chunk_text` value combines the article's `title` and `introduction`, the `context` values of the chunk, and the chunk's textual content `text`.
|
|
This strategy is designed to improve semantic search quality for document retrieval use cases on administrative procedures, as sketched below.
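
For illustration, here is a hypothetical sketch of this assembly; the exact separators and ordering used by the pipeline are assumptions, and only the ingredients come from the description above.

```python
# Hypothetical sketch: the separators and ordering are assumptions; only the
# ingredients (title, context, introduction, text) come from the dataset card.
def build_chunk_text(title: str, context: list[str], introduction: str, text: str) -> str:
    parts = [title, " > ".join(context), introduction, text]
    return "\n".join(part for part in parts if part)
```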
|
|
|
|
|
LangChain's `RecursiveCharacterTextSplitter` was used to produce these chunks (the `text` value), with the following parameters (see the sketch after the list):
|
|
- `chunk_size` = 1500 (to remain compatible with most LLMs' context windows)
|
|
- `chunk_overlap` = 20 |
|
|
- `length_function` = len |
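
For reference, here is a minimal sketch of this splitting step using the current `langchain-text-splitters` package (the original pipeline may have imported the splitter from an older LangChain namespace):

```python
# Minimal sketch of the splitting step with the parameters listed above.
# Requires: pip install langchain-text-splitters
from langchain_text_splitters import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
    chunk_size=1500,     # maximum number of characters per chunk
    chunk_overlap=20,    # characters shared between consecutive chunks
    length_function=len,
)

article_text = "..."  # full textual content of one article section
chunks = splitter.split_text(article_text)
```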
|
|
|
|
|
### 3. 🧠 Embeddings Generation |
|
|
|
|
|
Each `chunk_text` was embedded using the [**`BAAI/bge-m3`**](https://huggingface.co/BAAI/bge-m3) model. The resulting embedding vector is stored in the `embeddings_bge-m3` column as a **string**, but can easily be parsed back into a `list[float]` or NumPy array. |
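
For reference, one way to reproduce such embeddings is the `FlagEmbedding` package published by the model authors; whether the original pipeline used this exact library and these settings is an assumption.

```python
# Sketch: dense embeddings with BAAI/bge-m3 via FlagEmbedding
# (pip install FlagEmbedding). The pipeline's exact settings are assumptions.
from FlagEmbedding import BGEM3FlagModel

model = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)
vectors = model.encode(["Comment renouveler un passeport ?"])["dense_vecs"]
print(vectors.shape)  # (1, 1024), matching the embeddings_bge-m3 column
```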
|
|
|
|
|
## 🔄 The chunking doesn't fit your use case? |
|
|
|
|
|
[**SOON AVAILABLE FOR THIS DATASET**] ~~If you need to reconstitute the original, un-chunked dataset, you can follow [this tutorial notebook available on our GitHub repository](https://github.com/etalab-ia/mediatech/blob/main/docs/reconstruct_vector_database.ipynb).~~ |
|
|
|
|
|
⚠️ The tutorial is only relevant for datasets that were chunked **without overlap**. |
|
|
|
|
|
## 📌 Embedding Use Notice |
|
|
|
|
|
⚠️ The `embeddings_bge-m3` column is stored as a **stringified list** of floats (e.g., `"[-0.03062629,-0.017049594,...]"`). |
|
|
To use it as a vector, parse it into a list of floats or a NumPy array. For example, to load the dataset into a dataframe using the `datasets` library:
|
|
|
|
|
```python |
|
|
import pandas as pd |
|
|
import json |
|
|
from datasets import load_dataset |
|
|
# Requires the pyarrow library: pip install pyarrow
|
|
|
|
|
dataset = load_dataset("AgentPublic/service-public") |
|
|
df = pd.DataFrame(dataset['train']) |
|
|
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads) |
|
|
``` |
|
|
|
|
|
Otherwise, if you have already downloaded all parquet files from the `data/service-public-latest/` folder:
|
|
```python |
|
|
import pandas as pd |
|
|
import json |
|
|
# Requires the pyarrow library: pip install pyarrow




df = pd.read_parquet(path="service-public-latest/")  # Assuming all parquet files are located in this folder
|
|
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads) |
|
|
``` |
|
|
|
|
|
You can then use the dataframe as you wish, such as by inserting the data from the dataframe into the vector database of your choice. |
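
For instance, here is a minimal cosine-similarity search sketch over the parsed dataframe from the examples above; it assumes the query vector was produced with the same `BAAI/bge-m3` model.

```python
# Minimal semantic-search sketch over the parsed dataframe (embeddings already
# converted with json.loads). Assumes query_vec comes from BAAI/bge-m3.
import numpy as np

def top_k(query_vec, df, k=5):
    matrix = np.array(df["embeddings_bge-m3"].tolist(), dtype=np.float32)
    query = np.asarray(query_vec, dtype=np.float32)
    scores = matrix @ query / (np.linalg.norm(matrix, axis=1) * np.linalg.norm(query))
    return df.iloc[np.argsort(-scores)[:k]][["title", "url", "chunk_text"]]
```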
|
|
|
|
|
## 🐱 GitHub repository


The MediaTech project is open source! You are welcome to contribute, or to review the complete code used to build the dataset, in the [GitHub repository](https://github.com/etalab-ia/mediatech).
|
|
|
|
|
## 📚 Source & License |
|
|
|
|
|
### 🔗 Sources
|
|
- [Service-Public.fr official website](https://www.service-public.fr/) |
|
|
- [Data.Gouv.fr : Fiches pratiques et ressources de Service-Public.fr Particuliers](https://www.data.gouv.fr/fr/datasets/fiches-pratiques-et-ressources-de-service-public-fr-particuliers/)


- [Data.Gouv.fr : Fiches pratiques et ressources Entreprendre - Service-Public.fr](https://www.data.gouv.fr/fr/datasets/fiches-pratiques-et-ressources-entreprendre-service-public-fr/)
|
|
|
|
|
### 📄 License
|
|
**Etalab Open License 2.0**: This dataset is publicly available and may be reused under the terms of the Etalab open license (etalab-2.0).