---
language:
- fr
tags:
- france
- service-public
- demarches
- embeddings
- administration
- open-data
- government
pretty_name: Service-Public.fr practical sheets dataset
size_categories:
- 10K<n<100K
license: etalab-2.0
configs:
- config_name: latest
data_files: "data/service-public-latest/*.parquet"
default: true
---
---------------------------------------------------------------------------------------------------
### 📢 2026 Survey: Usage of MediaTech's public datasets
Do you use this dataset or other datasets from our [MediaTech](https://huggingface.co/collections/AgentPublic/mediatech) collection? Your feedback matters!
Help us improve our public datasets by answering this quick survey (5 min): 👉 https://grist.numerique.gouv.fr/o/albert/forms/gF4hLaq9VvUog6c5aVDuMw/11
Thank you for your contribution! 🙌
---------------------------------------------------------------------------------------------------
# 🇫🇷 Service-Public.fr practical sheets dataset (Administrative Procedures)
This dataset is derived from the official [Service-Public.fr](https://www.service-public.fr/) platform and contains practical information sheets and resources targeting both individuals (Particuliers) and entrepreneurs (Entreprendre).
The purpose of these sheets is to provide information on administrative procedures relating to a number of themes.
The data is publicly available on [data.gouv.fr](https://www.data.gouv.fr) and has been processed and chunked for optimized semantic retrieval and large-scale embedding use.
The dataset provides semantic-ready, structured and chunked data of official content covering administrative procedures across a wide range of everyday and business-related themes.
These chunks have been vectorized using the [`BAAI/bge-m3`](https://huggingface.co/BAAI/bge-m3) embedding model to enable semantic search and retrieval tasks.
Each record represents a semantically coherent text fragment (chunk) from an original sheet, enriched with metadata and a precomputed embedding vector suitable for search and retrieval applications (e.g., RAG pipelines).
---
## 🗂️ Dataset Contents
The dataset is provided in **Parquet format** and includes the following columns:
| Column Name | Type | Description |
|--------------------|------------------|-----------------------------------------------------------------------------|
| `chunk_id` | `str` | Unique generated and encoded hash of each chunk. |
| `doc_id` | `str` | Document identifier from the source site. |
| `chunk_index`      | `int`            | Index of the chunk within its original document, starting from 1.            |
| `chunk_xxh64` | `str` | XXH64 hash of the `chunk_text` value. |
| `audience` | `str` | Target audience: `Particuliers` and/or `Professionnels`. |
| `theme` | `str` | Thematic categories (e.g., `Famille - Scolarité, Travail - Formation`). |
| `title` | `str` | Title of the article. |
| `surtitre`          | `str`            | Higher-level theme in the article structure.                                 |
| `source`            | `str`            | Dataset source label (always `"service-public"` in this dataset).            |
| `introduction` | `str` | Introductory paragraph of the article. |
| `url` | `str` | URL of the original article. |
| `related_questions`| `list[dict]` | List of related questions, including their sid and URLs. |
| `web_services` | `list[dict]` | Associated web services (if any). |
| `context` | `list[str]` | Section names related to the chunk. |
| `text` | `str` | Textual content extracted and chunked from a section of the article. |
| `chunk_text`        | `str`            | Formatted text combining the `title`, `context`, `introduction` and `text` values. Used for embedding. |
| `embeddings_bge-m3` | `str`            | Embedding vector of `chunk_text` computed with `BAAI/bge-m3` (1024 dimensions), stored as a JSON array string. |
---
## 🛠️ Data Processing Methodology
### 1. 📥 Field Extraction
The following fields were extracted and/or transformed from the original XML files:
- **Basic fields**: `doc_id`, `theme`, `title`, `surtitre`, `introduction`, `url`, `related_questions` and `web_services` are extracted directly from the XML files, with some processing when needed.
- **Generated fields**:
  - `chunk_id`: a unique generated and encoded hash for each chunk.
  - `chunk_index`: the index of the chunk within its document. Each document has a unique `doc_id`.
  - `chunk_xxh64`: the XXH64 hash of the `chunk_text` value, useful to determine whether a chunk's text has changed from one version to another (see the sketch below).
  - `source`: always `"service-public"` here.
- **Textual fields**:
  - `context`: optional contextual hierarchy (e.g., nested section names).
  - `text`: textual content of the article chunk. This value corresponds to a semantically coherent fragment of textual content extracted from the XML document structure for a given `sid`.
The `source` column is a fixed value here because this dataset was built at the same time as the [Travail Emploi Dataset](https://huggingface.co/datasets/AgentPublic/travail-emploi).
Both datasets were intended to be grouped together in a single vector collection, so they carry different `source` values.
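As a minimal sketch (assuming the `xxhash` Python package and UTF-8 encoding; the exact code used by the pipeline lives in the GitHub repository), `chunk_xxh64` can be recomputed to spot chunks whose text changed between two dataset versions:
```python
import xxhash

def chunk_hash(chunk_text: str) -> str:
    # Hypothetical recomputation of `chunk_xxh64`; encoding and seed are assumptions.
    return xxhash.xxh64(chunk_text.encode("utf-8")).hexdigest()

# Compare the stored `chunk_xxh64` of two versions of the same chunk_id:
# identical hashes mean the chunk_text has not changed.
print(chunk_hash("Titre de la fiche (Section) Introduction... texte du chunk..."))
```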
### 2. ✂️ Generation of 'chunk_text'
The `chunk_text` value combines the `title` and `introduction` of the article, the `context` values of the chunk, and the chunk's textual content (`text`).
This strategy is designed to improve semantic search for document search use cases on administrative procedures.
LangChain's `RecursiveCharacterTextSplitter` was used to produce these chunks (the `text` value). The parameters used are:
- `chunk_size` = 1024
- `chunk_overlap` = 0
- `length_function` = bge_m3_tokenizer
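As a minimal sketch (assuming the `langchain-text-splitters` package and the `BAAI/bge-m3` tokenizer from `transformers` as the token-counting length function; the exact implementation is in the GitHub repository), the splitter could be configured as follows:
```python
from langchain_text_splitters import RecursiveCharacterTextSplitter
from transformers import AutoTokenizer

# Assumed length function: measure chunk size in bge-m3 tokens rather than characters
tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-m3")

def bge_m3_tokenizer(text: str) -> int:
    return len(tokenizer.encode(text, add_special_tokens=False))

splitter = RecursiveCharacterTextSplitter(
    chunk_size=1024,
    chunk_overlap=0,
    length_function=bge_m3_tokenizer,
)

section_text = "Texte d'une section de fiche pratique..."  # placeholder content
chunks = splitter.split_text(section_text)  # each element becomes one `text` value
```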
### 3. 🧠 Embeddings Generation
Each `chunk_text` was embedded using the [**`BAAI/bge-m3`**](https://huggingface.co/BAAI/bge-m3) model. The resulting embedding vector is stored in the `embeddings_bge-m3` column as a **string**, but can easily be parsed back into a `list[float]` or NumPy array.
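A minimal sketch of how such embeddings can be produced (shown here with the `sentence-transformers` library; the pipeline's actual embedding code may differ and is available in the GitHub repository):
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-m3")

# Placeholder chunk_text values; each yields one 1024-dimensional dense vector
chunk_texts = ["Titre de la fiche (Section) : introduction... texte du chunk..."]
vectors = model.encode(chunk_texts, normalize_embeddings=True)  # normalization is an assumption
print(vectors.shape)  # (1, 1024)
```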
## 🎓 Tutorials
### 🔄 1. The chunking doesn't fit your use case?
If you need to reconstitute the original, un-chunked dataset, you can follow [this tutorial notebook available on our GitHub repository](https://github.com/etalab-ia/mediatech/blob/main/docs/reconstruct_vector_database.ipynb).
⚠️ The tutorial is only relevant for datasets that were chunked **without overlap**.
### 🤖 2. How to load MediaTech's datasets from Hugging Face and use them in a RAG pipeline?
To learn how to load MediaTech's datasets from Hugging Face and integrate them into a Retrieval-Augmented Generation (RAG) pipeline, check out our [step-by-step RAG tutorial available on our GitHub repository!](https://github.com/etalab-ia/mediatech/blob/main/docs/hugging_face_rag_tutorial.ipynb)
### 📌 3. Embedding Use Notice
⚠️ The `embeddings_bge-m3` column is stored as a **stringified list** of floats (e.g., `"[-0.03062629,-0.017049594,...]"`).
To use it as a vector, parse it back into a list of floats or a NumPy array.
#### Using the `datasets` library:
```python
import pandas as pd
import json
from datasets import load_dataset
# PyArrow must be installed in your Python environment: pip install pyarrow
dataset = load_dataset("AgentPublic/service-public")
df = pd.DataFrame(dataset["train"])
# Parse the stringified embedding vectors back into lists of floats
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
```
#### Using downloaded local Parquet files:
```python
import pandas as pd
import json
# PyArrow must be installed in your Python environment: pip install pyarrow
df = pd.read_parquet(path="service-public-latest/")  # assuming all Parquet files are located in this folder
# Parse the stringified embedding vectors back into lists of floats
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
```
You can then use the DataFrame as you wish, for example by inserting its rows into the vector database of your choice.
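For instance, here is a minimal sketch of an in-memory semantic search over the parsed embeddings (assuming the `df` DataFrame from the snippets above, and `sentence-transformers` to embed the query with the same `BAAI/bge-m3` model):
```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-m3")

# Stack the parsed chunk embeddings into an (n_chunks, 1024) matrix
matrix = np.vstack(df["embeddings_bge-m3"].to_numpy())

query = "Comment renouveler une carte d'identité ?"
query_vec = model.encode(query, normalize_embeddings=True)

# Rank chunks by cosine similarity with the query
scores = (matrix @ query_vec) / (np.linalg.norm(matrix, axis=1) * np.linalg.norm(query_vec))
top = scores.argsort()[::-1][:5]
print(df.iloc[top][["title", "url"]])
```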
## 🐱 GitHub repository
The MediaTech project is open source! You are free to contribute or to explore the complete code used to build this dataset in the [GitHub repository](https://github.com/etalab-ia/mediatech).
## 📚 Source & License
### 🔗 Sources
- [Service-Public.fr official website](https://www.service-public.fr/)
- [Data.Gouv.fr : Fiches pratiques et ressources de Service-Public.fr Particuliers](https://www.data.gouv.fr/fr/datasets/fiches-pratiques-et-ressources-de-service-public-fr-particuliers/)
- [Data.Gouv.fr : Fiches pratiques et ressources Entreprendre - Service-Public.fr](https://www.data.gouv.fr/fr/datasets/fiches-pratiques-et-ressources-entreprendre-service-public-fr/)
### 📄 License
**Etalab Open License 2.0**: this dataset is publicly available and can be reused under the conditions of the Etalab open license.