---
language:
- fr
tags:
- france
- public-sector
- embeddings
- directory
- open-data
- government
- etalab
pretty_name: French State Administrations Directory
size_categories:
- 1K<n<10K
license: etalab-2.0
configs:
- config_name: latest
  data_files: "data/state-administrations-directory-latest/*.parquet"
  default: true
---
# 🇫🇷 French State Administrations Directory Dataset

This dataset is a processed and embedded version of the public data **Référentiel de l’organisation administrative de l’État** (French State Administrations Directory), published by **DILA** (Direction de l'information légale et administrative) on [data.gouv.fr](https://www.data.gouv.fr/fr/datasets/referentiel-de-lorganisation-administrative-de-letat/).
This information is also available on the official directory website of Service-Public.fr: https://lannuaire.service-public.fr/

The dataset provides semantic-ready, structured and chunked data of French state entities, including organizational details, missions, contact information, and hierarchical links. Each chunk of text is vectorized using the [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) embedding model to enable semantic search and retrieval tasks.

---

## 🗂️ Dataset Contents

The dataset is provided in **Parquet format** and contains the following columns:

| Column Name             | Type                        | Description                                                                 |
|------------------------|-----------------------------|-----------------------------------------------------------------------------|
| `chunk_id`             | `str`                       | Unique source-based identifier of the chunk.                                |
| `doc_id`               | `str`                       | Document identifier. Identical to `chunk_id`, since each document contains a single chunk. |
| `chunk_xxh64`          | `str`                       | XXH64 hash of the `chunk_text` value.                                       |
| `types`                | `str`                       | Type(s) of administrative entity.                                           |
| `name`                 | `str`                       | Name of the organization or service.                                        |
| `mission_description`  | `str`                       | Description of the entity's mission.                                        |
| `addresses`            | `list[dict]`              | List of address objects (street, postal code, city, etc.).                    |
| `phone_numbers`        | `list[str]`                 | List of telephone numbers.                                                  |
| `mails`                | `list[str]`                 | List of contact email addresses.                                            |
| `urls`                 | `list[str]`                 | List of related URLs.                                                       |
| `social_medias`        | `list[str]`                 | Social media accounts.                                                      |
| `mobile_applications`  | `list[str]`                 | Related mobile applications.                                                |
| `opening_hours`        | `str`                       | Opening hours.                                                              |
| `contact_forms`        | `list[str]`                 | Contact form URLs.                                                          |
| `additional_information` | `str`                     | Additional information.                                                     |
| `modification_date`    | `str`                       | Last update date.                                                           |
| `siret`                | `str`                       | SIRET number.                                                               |
| `siren`                | `str`                       | SIREN number.                                                               |
| `people_in_charge`     | `list[dict]`              | List of responsible persons.                                                  |
| `organizational_chart` | `list[str]`                 | Organization chart references.                                              |
| `hierarchy`            | `list[dict]`              | Links to parent or child entities.                                            |
| `directory_url`        | `str`                       | Source URL from the official state directory website.                       |
| `chunk_text`           | `str`                       | Textual content of the administrative chunk.                                |
| `embeddings_bge-m3`           | `str` (stringified list)    | Embeddings of `chunk_text` using `BAAI/bge-m3`. Stored as a JSON array string. |

---


## 🛠️ Data Processing Methodology

### 📥 1. Field Extraction

The following fields were extracted and/or transformed from the original JSON:

- **Basic fields**: `chunk_id`, `doc_id`, `name`, `types`, `mission_description`, `additional_information`, `siret`, `siren`, `directory_url`, `modification_date` are directly extracted from JSON attributes.
- **Structured lists**:
  - `addresses`: list of dictionaries with `adresse`, `code_postal`, `commune`, `pays`, `longitude`, and `latitude`.
  - `phone_numbers`, `mails`, `urls`, `social_medias`, `mobile_applications`, `contact_forms`: derived from their respective fields with formatting.
- **People and structure**:
  - `people_in_charge`: list of dictionaries representing staff members or leadership (title, name, rank, etc.).
  - `organizational_chart`, `hierarchy`: structural information within the administration.
- **Other fields**:
  - `opening_hours`: built using a custom function that parses declared time slots into readable strings.
  - `chunk_xxh64`: the XXH64 hash of the `chunk_text` value, useful for detecting whether `chunk_text` has changed from one version of the dataset to another.

### ✂️ 2. Generation of `chunk_text`

A synthetic text field called `chunk_text` was created to summarize key aspects of each administrative body. This field is designed for semantic search and embedding generation. It includes:

- The entity’s name (`name`)
- Its mission statement, if available (`mission_description`)
- Key responsible individuals, formatted using role, title, name, and rank (`people_in_charge`)

No further splitting was needed: each document is short enough to be embedded as a single chunk.
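The assembly can be sketched as follows; the field names are real dataset columns, but the separators and exact wording are illustrative rather than the production implementation (see the MediaTech repository for the latter):

```python
def build_chunk_text(record: dict) -> str:
    """Concatenate the fields used for embedding into a single chunk.

    Illustrative sketch: joins name, mission, and people in charge
    with newlines; the real pipeline may format them differently.
    """
    parts = [record["name"]]
    if record.get("mission_description"):
        parts.append(record["mission_description"])
    for person in record.get("people_in_charge", []):
        # e.g. {"role": ..., "title": ..., "name": ..., "rank": ...}
        fields = [person.get(k) for k in ("role", "title", "name", "rank")]
        parts.append(" ".join(f for f in fields if f))
    return "\n".join(parts)
```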

### 🧠 3. Embeddings Generation

Each `chunk_text` was embedded using the [**`BAAI/bge-m3`**](https://huggingface.co/BAAI/bge-m3) model. The resulting embedding vector is stored in the `embeddings_bge-m3` column as a **string**, but can easily be parsed back into a `list[float]` or NumPy array.
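The embedding step can be sketched as below. This assumes the `FlagEmbedding` package (the reference implementation for `BAAI/bge-m3`), whose `BGEM3FlagModel.encode` returns a dict containing a `dense_vecs` key; the exact call used by the production pipeline lives in the MediaTech repository:

```python
import json

def embed_chunks(chunk_texts: list[str], model) -> list[str]:
    """Encode each chunk and serialize the dense vector as a JSON string,
    matching the storage format of the `embeddings_bge-m3` column."""
    dense = model.encode(chunk_texts)["dense_vecs"]
    return [json.dumps([float(x) for x in vec]) for vec in dense]

# Real usage (downloads the model weights on first run):
# from FlagEmbedding import BGEM3FlagModel
# model = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)
# embeddings = embed_chunks(df["chunk_text"].tolist(), model)
```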

## 📌 Embeddings Notice
⚠️ The `embeddings_bge-m3` column is stored as a stringified list (e.g., `"[-0.03062629,-0.017049594,...]"`). To use it as a vector, you need to parse it into a list of floats or a NumPy array. For example, to load the dataset into a dataframe with the `datasets` library:

```python
import pandas as pd
import json
from datasets import load_dataset
# Requires the pyarrow library: pip install pyarrow

dataset = load_dataset("AgentPublic/state-administrations-directory")
df = pd.DataFrame(dataset['train'])
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
```

Otherwise, if you have already downloaded all Parquet files from the `data/state-administrations-directory-latest/` folder:
```python
import pandas as pd
import json
# Requires the pyarrow library: pip install pyarrow

df = pd.read_parquet(path="state-administrations-directory-latest/") # Assuming all Parquet files are located in this folder
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
```

You can then use the dataframe as you wish, such as by inserting the data from the dataframe into the vector database of your choice.
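For a quick in-memory alternative to a vector database, the parsed embeddings can be searched directly with NumPy. This sketch assumes the `embeddings_bge-m3` column has already been parsed into lists of floats, and that the query vector was produced with the same `BAAI/bge-m3` model:

```python
import numpy as np
import pandas as pd

def top_k(df: pd.DataFrame, query_vec, k: int = 5) -> pd.DataFrame:
    """Rank rows by cosine similarity between a query vector and the
    parsed `embeddings_bge-m3` column (lists of floats, not strings)."""
    mat = np.vstack(df["embeddings_bge-m3"].to_numpy())
    q = np.asarray(query_vec, dtype=np.float64)
    sims = (mat @ q) / (np.linalg.norm(mat, axis=1) * np.linalg.norm(q))
    return df.assign(score=sims).nlargest(k, "score")
```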

## 🐱 GitHub Repository
The MediaTech project is open source! You are free to contribute or explore the complete code used to build this dataset in the [GitHub repository](https://github.com/etalab-ia/mediatech).

## 📚 Source & License

### 🔗 Source
- [Lannuaire.Service-Public.fr](https://lannuaire.service-public.fr/)
- [Data.Gouv.fr : Référentiel de l’organisation administrative de l’État](https://www.data.gouv.fr/fr/datasets/referentiel-de-lorganisation-administrative-de-letat/)

### 📄 License
**Open License (Etalab)** — This dataset is publicly available and can be reused under the conditions of the Etalab open license.