---
language:
- fr
tags:
- france
- legislation
- law
- embeddings
- open-data
- government
- parlement
pretty_name: French Legislative Dossiers Dataset (DOLE)
size_categories:
- 1K<n<10K
license: etalab-2.0
configs:
- config_name: latest
  data_files: "data/dole-latest/*.parquet"
  default: true
---
---------------------------------------------------------------------------------------------------
### 📢 2026 Survey: Use of MediaTech's Public Datasets
Do you use this dataset or other datasets from our [MediaTech](https://huggingface.co/collections/AgentPublic/mediatech) collection? Your feedback matters!
Help us improve our public datasets by answering this quick survey (5 min): 👉 https://grist.numerique.gouv.fr/o/albert/forms/gF4hLaq9VvUog6c5aVDuMw/11
Thank you for your contribution! 🙌

---------------------------------------------------------------------------------------------------
# 🇫🇷 French Legislative Dossiers Dataset (DOLE)

This dataset provides a semantic-ready, chunked and embedded version of the **Dossiers Législatifs** ("DOLE") published by the French government. It includes all **laws promulgated since the XIIᵉ legislature (June 2002)**, **ordinances**, and **legislative proposals** under preparation.
The original data is downloaded from [the dedicated **DILA** open data repository](https://echanges.dila.gouv.fr/OPENDATA/DOLE) and is also published on [data.gouv.fr](https://www.data.gouv.fr/datasets/dole-les-dossiers-legislatifs/).

Each article is chunked and vectorized using the [`BAAI/bge-m3`](https://huggingface.co/BAAI/bge-m3) embedding model, enabling use in **semantic search**, **retrieval-augmented generation (RAG)**, and **legal research** systems, for example.

---

## 🗂️ Dataset Contents

The dataset is available in **Parquet format** and contains the following columns:

| Column Name         | Type             | Description                                                                 |
|---------------------|------------------|-----------------------------------------------------------------------------|
| `chunk_id`          | `str`            | Unique identifier for each chunk.                                           |
| `doc_id`            | `str`            | Document identifier from the source site.                                   |
| `chunk_index`       | `int`            | Index of the chunk within its original document, starting from 1.           |
| `chunk_xxh64`       | `str`            | XXH64 hash of the `chunk_text` value.                                       |
| `category`          | `str`            | Type of dossier (e.g., `LOI_PUBLIEE`, `PROJET_LOI`, etc.).                  |
| `content_type`      | `str`            | Nature of the content: `article`, `dossier_content`, or `explanatory_memorandum`. |
| `title`             | `str`            | Title summarizing the subject matter.                                       |
| `number`            | `str`            | Internal document number.                                                   |
| `wording`           | `str`            | Label (libellé): legislature reference (e.g., `XIVème législature`).        |
| `creation_date`     | `str`            | Creation or publication date (YYYY-MM-DD).                                  |
| `article_number`    | `int` or `null`  | Article number if applicable.                                               |
| `article_title`     | `str` or `null`  | Optional title of the article.                                              |
| `article_synthesis` | `str` or `null`  | Optional synthesis of the article.                                          |
| `text`              | `str` or `null`  | Text content of the explanatory_memorandum, article or file content (contenu du dossier) chunk.|
| `chunk_text`        | `str`            | Concatenated text (`title` + `article_text` or related content).            |
| `embeddings_bge-m3` | `str`            | Embedding vector of `chunk_text` using `BAAI/bge-m3`, stored as JSON string.|

---

## 🛠️ Data Processing Methodology

### 🧩 1. Content Extraction

Each **dossier législatif** was parsed, processed, and standardized from its official XML structure. 
Metadata, article blocks, and explanatory sections were normalized into a unified schema. 
Specific rules were applied per content type:

- `explanatory_memorandum`: Includes only the introduction of the explanatory memorandum. Article syntheses found in the memorandum are split by their `article_number` and stored in `article_synthesis`.
  Article fields are `null`.
  An explanatory memorandum (exposé des motifs) is an official text that accompanies a draft or proposed law.
  It explains why the law is being proposed, the context in which it is set, and the objectives pursued by the legislator.
- `dossier_content`: Contains the dossier's textual content when the split by article was not possible.
  The split may fail when the dossier content does not mention article numbers, or when the code was not adapted to a specific case in which the split was not possible.
  Article metadata fields are `null`.
- `article`: Structured content, where `article_number` and `text` are always present. `article_title` and `article_synthesis` may be missing.
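As an illustration of the rules above, the unified schema can be sketched with a minimal record builder. This is hypothetical helper code, not the project's actual pipeline; only the field names follow the dataset's column schema:

```python
# Hypothetical sketch of the unified schema described above; not the actual
# MediaTech pipeline code. Field names follow the dataset's column schema.
def make_record(content_type, doc_id, title, text,
                article_number=None, article_title=None, article_synthesis=None):
    record = {
        "doc_id": doc_id,
        "content_type": content_type,
        "title": title,
        "text": text,
        "article_number": article_number,
        "article_title": article_title,
        "article_synthesis": article_synthesis,
    }
    # Per the rules above, article fields stay null unless the chunk is an article.
    if content_type != "article":
        record["article_number"] = None
        record["article_title"] = None
        record["article_synthesis"] = None
    elif article_number is None:
        # For `article` chunks, `article_number` and `text` are always present.
        raise ValueError("article chunks must carry an article_number")
    return record
```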

- **Basic fields**: `doc_id` (cid), `category`, `title`, `number`, `wording`, and `creation_date` were taken directly from the source XML file.
- **Generated fields**:
  - `chunk_id`: A unique hash for each text chunk.
  - `chunk_index`: Indicates the order of a chunk within its original document.
  - `chunk_xxh64`: The XXH64 hash of the `chunk_text` value, useful for determining whether the `chunk_text` value has changed from one version to another.
  - `content_type`: Nature of the content.
  - `article_number`: Number of the article. Available only if `content_type` is `article`.
  - `article_title`: Title of the article. Available only if `content_type` is `article`.
  - `article_synthesis`: Synthesis of the article extracted from the explanatory memorandum. Available only if `content_type` is `article`.
- **Textual fields**:
  - `text`: Chunk of the main text content.
    It can be an article text content extracted from the dossier content, a chunk of the explanatory memorandum's introduction or a chunk from the dossier content.
  - `chunk_text`: Combines `title` and the main `text` body to maximize embedding relevance.
    If `content_type` is `article`, then the article number is also added.
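For illustration, the `chunk_text` construction described above might look like the following sketch. The helper name and the exact separator are assumptions, not the project's verified formatting:

```python
def build_chunk_text(title, text, article_number=None):
    # Combine the title and the main text body; per the description above,
    # the article number is also included when the chunk is an article.
    # The separator used by the real pipeline is an assumption here.
    parts = [title]
    if article_number is not None:
        parts.append(f"Article {article_number}")
    parts.append(text)
    return "\n".join(parts)
```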
    
### ✂️ 2. Chunk Generation

A `chunk_text` was built by combining the `title`, the `article_number` if applicable, and the corresponding `text` content section. 
Chunking ensures semantic granularity for embedding purposes.

No recursive split was necessary for `content_type` = `article`, since legal articles and memos are inherently structured and relatively short. 
When a split was needed, LangChain's `RecursiveCharacterTextSplitter` was used to produce the chunks (the `text` value), with the following parameters:

- `chunk_size` = 8000
- `chunk_overlap` = 0
- `length_function` = len
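With `chunk_overlap = 0` and `length_function = len`, the configuration is close to plain fixed-size character slicing. The stdlib-only sketch below mimics that setup; it is not LangChain's actual implementation, which additionally prefers splitting on separators such as paragraph and sentence breaks before falling back to character slicing:

```python
def split_text(text, chunk_size=8000, chunk_overlap=0):
    # Simplified stand-in for RecursiveCharacterTextSplitter with the
    # parameters listed above. With chunk_overlap=0 each character appears
    # in exactly one chunk, which is what makes reconstruction possible.
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]
```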

---

### 🧠 3. Embeddings Generation

Each `chunk_text` was embedded using the [**`BAAI/bge-m3`**](https://huggingface.co/BAAI/bge-m3) model. 
The resulting embedding vector is stored in the `embeddings_bge-m3` column as a **string**, but can easily be parsed back into a `list[float]` or NumPy array.

## 🎓 Tutorials

### 🔄 1. The chunking doesn't fit your use case?

If you need to reconstitute the original, un-chunked dataset, you can follow [this tutorial notebook available on our GitHub repository](https://github.com/etalab-ia/mediatech/blob/main/docs/reconstruct_vector_database.ipynb).

⚠️ The tutorial is only relevant for datasets that were chunked **without overlap**.

### 🤖 2. How to load MediaTech's datasets from Hugging Face and use them in a RAG pipeline?

To learn how to load MediaTech's datasets from Hugging Face and integrate them into a Retrieval-Augmented Generation (RAG) pipeline, check out our [step-by-step RAG tutorial on our GitHub repository](https://github.com/etalab-ia/mediatech/blob/main/docs/hugging_face_rag_tutorial.ipynb)!

### 📌 3. Embedding Use Notice

⚠️ The `embeddings_bge-m3` column is stored as a **stringified list** of floats (e.g., `"[-0.03062629,-0.017049594,...]"`). 
To use it as a vector, you need to parse it into a list of floats or NumPy array. 

#### Using the `datasets` library:

```python
import pandas as pd
import json
from datasets import load_dataset
# Requires the pyarrow library: pip install pyarrow

dataset = load_dataset("AgentPublic/dole")
df = pd.DataFrame(dataset['train'])
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
```
#### Using downloaded local Parquet files:

```python
import pandas as pd
import json
# Requires the pyarrow library: pip install pyarrow

df = pd.read_parquet(path="dole-latest/")  # Assuming all Parquet files are located in this folder
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
```

You can then use the dataframe as you wish, such as by inserting the data from the dataframe into the vector database of your choice.
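As a minimal example of using the parsed vectors without a vector database, a brute-force cosine-similarity search can be sketched as follows. The query vector here is a toy stand-in, not a real `bge-m3` embedding, and `top_k` is a hypothetical helper:

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, rows, k=5):
    # rows: iterable of (chunk_id, vector) pairs, e.g. built from the
    # parsed `embeddings_bge-m3` column above.
    scored = [(cid, cosine_similarity(query_vec, vec)) for cid, vec in rows]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:k]
```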

---

## 🐱 GitHub Repository
The MediaTech project is open source! You are free to contribute, or to browse the complete code used to build the dataset, in the [GitHub repository](https://github.com/etalab-ia/mediatech).

## 📚 Source & License

### 🔗 Source:
- [**DILA** open data repository](https://echanges.dila.gouv.fr/OPENDATA/DOLE)
- [data.gouv.fr: DOLE : les dossiers législatifs](https://www.data.gouv.fr/datasets/dole-les-dossiers-legislatifs/)

### 📄 License:
**Open License (Etalab)** — This dataset is publicly available and reusable under the Etalab open license.