---
language:
  - fr
tags:
  - france
  - service-public
  - demarches
  - embeddings
  - administration
  - open-data
  - government
pretty_name: Service-Public.fr practical sheets dataset
size_categories:
  - 10K<n<100K
license: etalab-2.0
configs:
  - config_name: latest
    data_files: data/service-public-latest/*.parquet
    default: true
---

## 📢 2026 survey: use of MediaTech's public datasets

Do you use this dataset or other datasets from our MediaTech collection? Your feedback matters! Help us improve our public datasets by answering this quick survey (5 min): 👉 https://grist.numerique.gouv.fr/o/albert/forms/gF4hLaq9VvUog6c5aVDuMw/11 Thank you for your contribution! 🙌


# 🇫🇷 Service-Public.fr practical sheets dataset (Administrative Procedures)

This dataset is derived from the official Service-Public.fr platform and contains practical information sheets and resources targeting both individuals (Particuliers) and entrepreneurs (Entreprendre). The purpose of these sheets is to provide information on administrative procedures relating to a number of themes. The data is publicly available on data.gouv.fr and has been processed and chunked for optimized semantic retrieval and large-scale embedding use.

The dataset provides semantic-ready, structured and chunked data of official content related to employment, labor law and administrative procedures. These chunks have been vectorized using the BAAI/bge-m3 embedding model to enable semantic search and retrieval tasks.

Each record represents a semantically coherent text fragment (chunk) from an original sheet, enriched with metadata and a precomputed embedding vector suitable for search and retrieval applications (e.g., RAG pipelines).


๐Ÿ—‚๏ธ Dataset Contents

The dataset is provided in Parquet format and includes the following columns:

| Column Name | Type | Description |
|---|---|---|
| chunk_id | str | Unique generated and encoded hash of each chunk. |
| doc_id | str | Document identifier from the source site. |
| chunk_index | int | Index of the chunk within its original document, starting from 1. |
| chunk_xxh64 | str | XXH64 hash of the chunk_text value. |
| audience | str | Target audience: Particuliers and/or Professionnels. |
| theme | str | Thematic categories (e.g., Famille - Scolarité, Travail - Formation). |
| title | str | Title of the article. |
| surtitre | str | Higher-level theme in the article structure. |
| source | str | Dataset source label (always "service-public" in this dataset). |
| introduction | str | Introductory paragraph of the article. |
| url | str | URL of the original article. |
| related_questions | list[dict] | List of related questions, including their sid and URLs. |
| web_services | list[dict] | Associated web services (if any). |
| context | list[str] | Section names related to the chunk. |
| text | str | Textual content extracted and chunked from a section of the article. |
| chunk_text | str | Formatted text combining the title, context, introduction and text values. Used for embedding. |
| embeddings_bge-m3 | str | Embedding vector of chunk_text computed with BAAI/bge-m3 (length 1024), stored as a JSON array string. |
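
For a quick look at this schema, here is a minimal sketch that loads the default `latest` configuration with the datasets library and inspects one record (column names as in the table above):

```python
from datasets import load_dataset

# Load the default "latest" configuration and list the available columns
ds = load_dataset("AgentPublic/service-public", split="train")
print(ds.column_names)

# Inspect a single chunk record
record = ds[0]
print(record["title"], record["url"])
print(record["chunk_index"], record["context"])
```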

๐Ÿ› ๏ธ Data Processing Methodology

### 1. 📥 Field Extraction

The following fields were extracted and/or transformed from the original XML files:

- Basic fields: doc_id, theme, title, surtitre, introduction, url, related_questions and web_services are extracted directly from the XML files, with some processing when needed.
- Generated fields:
  - chunk_id: a unique generated and encoded hash for each chunk.
  - chunk_index: the index of the chunk within its document. Each document has a unique doc_id.
  - chunk_xxh64: the XXH64 hash of the chunk_text value. It is useful to determine whether the chunk_text value has changed from one version to another (see the sketch below).
  - source: always "service-public" here.
- Textual fields:
  - context: optional contextual hierarchy (e.g., nested sections).
  - text: textual content of the article chunk, i.e. a semantically coherent fragment of text extracted from the XML document structure for a given sid.

The source column is a fixed value here because this dataset was built at the same time as the Travail Emploi dataset. Since both datasets were intended to be grouped together in a single vector collection, each one carries a different source value.
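
As an illustration, a minimal sketch of how chunk_xxh64 can be used to detect changed chunks between two dataset versions; the exact hashing convention (UTF-8 bytes, hexadecimal digest) is an assumption here:

```python
import xxhash

def chunk_text_digest(chunk_text: str) -> str:
    # Assumption: chunk_xxh64 is the hexadecimal XXH64 digest of the UTF-8 encoded chunk_text
    return xxhash.xxh64(chunk_text.encode("utf-8")).hexdigest()

# A chunk has changed between two versions if its recomputed digest differs:
# changed = chunk_text_digest(row["chunk_text"]) != row["chunk_xxh64"]
```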

2. โœ‚๏ธ Generation of 'chunk_text'

The chunk_text value combines the title and introduction of the article, the context values of the chunk, and the chunked textual content (text). This strategy is designed to improve semantic search for document retrieval use cases on administrative procedures.

LangChain's RecursiveCharacterTextSplitter was used to produce these chunks (the text value), with the following parameters (see the sketch after this list):

  - chunk_size = 1500 (to maximize compatibility with most LLMs' context windows)
  - chunk_overlap = 20
  - length_function = len
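
A minimal sketch of the splitter call with these parameters; the variable section_text is a placeholder and the import path may differ depending on your LangChain version:

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
    chunk_size=1500,      # maximize compatibility with most LLMs' context windows
    chunk_overlap=20,
    length_function=len,
)

# section_text stands for the textual content of one article section
chunks = splitter.split_text(section_text)  # each element becomes one `text` value
```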

### 3. 🧠 Embeddings Generation

Each chunk_text was embedded using the BAAI/bge-m3 model. The resulting embedding vector is stored in the embeddings_bge-m3 column as a string, but can easily be parsed back into a list[float] or NumPy array.
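
For reference, a minimal sketch of how such embeddings can be reproduced with the FlagEmbedding package; the exact encoding pipeline used to build this dataset may differ:

```python
from FlagEmbedding import BGEM3FlagModel

# BAAI/bge-m3 produces 1024-dimensional dense vectors
model = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)
output = model.encode(["<some chunk_text value>"])
dense_vectors = output["dense_vecs"]  # shape: (n_texts, 1024)
```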

## 🔄 The chunking doesn't fit your use case?

[SOON AVAILABLE FOR THIS DATASET] If you need to reconstitute the original, un-chunked dataset, you can follow this tutorial notebook available on our GitHub repository.

โš ๏ธ The tutorial is only relevant for datasets that were chunked without overlap.

## 📌 Embedding Use Notice

โš ๏ธ The embeddings_bge-m3 column is stored as a stringified list of floats (e.g., "[-0.03062629,-0.017049594,...]"). To use it as a vector, you need to parse it into a list of floats or NumPy array. For example, if you want to load the dataset into a dataframe by using the datasets library:

```python
import pandas as pd
import json
from datasets import load_dataset
# The pyarrow library must be installed in your Python environment for this example: pip install pyarrow

dataset = load_dataset("AgentPublic/service-public")
df = pd.DataFrame(dataset["train"])
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
```

Otherwise, if you have already downloaded all the Parquet files from the data/service-public-latest/ folder:

```python
import pandas as pd
import json
# The pyarrow library must be installed in your Python environment for this example: pip install pyarrow

df = pd.read_parquet(path="service-public-latest/")  # assuming all Parquet files are located in this folder
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
```

You can then use the dataframe as you wish, for example by inserting its contents into the vector database of your choice (see the sketch below).
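
As an illustration, a minimal in-memory semantic search sketch over the parsed embeddings, assuming the df dataframe from the snippets above and a hypothetical embed() helper that returns a BAAI/bge-m3 vector for the query:

```python
import numpy as np

# Stack the parsed embeddings into a (n_chunks, 1024) matrix and normalize rows
matrix = np.vstack(df["embeddings_bge-m3"].to_numpy())
matrix = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)

# embed() is a placeholder for your own BAAI/bge-m3 encoding call
query = np.asarray(embed("Comment renouveler un passeport ?"), dtype=np.float32)
query = query / np.linalg.norm(query)

# Cosine similarity and top-5 results
scores = matrix @ query
top_k = np.argsort(-scores)[:5]
print(df.iloc[top_k][["title", "url"]])
```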

๐Ÿฑ GitHub repository :

The MediaTech project is open source! You are free to contribute or to explore the complete code used to build the dataset in the GitHub repository.

## 📚 Source & License

### 🔗 Source:

### 📄 License:

Open License (Etalab): this dataset is publicly available and can be reused under the conditions of the Etalab open license.